Compare commits


2 Commits

Author   SHA1         Message                Date
nolash   210a4ef7c1   Replace defaults       2021-02-01 21:36:14 +01:00
nolash   d7f10d54d1   Environment revisions  2021-02-01 21:22:37 +01:00
383 changed files with 849 additions and 19794 deletions

4
.dockerignore Normal file
View File

@@ -0,0 +1,4 @@
.git
.cache
.dot
**/doc

2
.gitignore vendored
View File

@@ -1,2 +0,0 @@
service-configs/*
!service-configs/.gitkeep

View File

@@ -1,10 +1,13 @@
variables:
GIT_SUBMODULE_STRATEGY: recursive
before_script:
- git submodule sync --recursive
- git submodule update --init --recursive
include:
- local: 'ci_templates/.cic-template.yml'
- local: 'apps/contract-migration/.gitlab-ci.yml'
- local: 'apps/cic-eth/.gitlab-ci.yml'
- local: 'apps/cic-ussd/.gitlab-ci.yml'
- local: 'apps/cic-notify/.gitlab-ci.yml'
- local: 'apps/cic-meta/.gitlab-ci.yml'
# - local: 'apps/cic-eth/.gitlab-ci.yml'
stages:
- build

12
.gitmodules vendored
View File

@@ -1,3 +1,15 @@
[submodule "apps/cic-ussd"]
path = apps/cic-ussd
url = git@gitlab.com:grassrootseconomics/cic-ussd.git
[submodule "apps/cic-notify"]
path = apps/cic-notify
url = git@gitlab.com:grassrootseconomics/cic-notify.git
[submodule "apps/cic-cache"]
path = apps/cic-cache
url = git@gitlab.com:grassrootseconomics/cic-cache.git
[submodule "apps/cic-meta"]
path = apps/cic-meta
url = git@gitlab.com:grassrootseconomics/cic-meta.git
[submodule "apps/bloxbergValidatorSetup"]
path = apps/bloxbergValidatorSetup
url = git@gitlab.com:grassrootseconomics/bloxbergValidatorSetup.git

View File

@@ -2,22 +2,6 @@
## Getting started
## Make some keys
```
docker build -t bloxie . && docker run -v "$(pwd)/keys:/root/keys" --rm -it -t bloxie account new --chain /root/bloxberg.json --keys-path /root/keys
```
### Prepare the repo
This is stuff we need to put in the makefile, but for now...
File mounts and permissions need to be set
```
chmod -R 755 scripts/initdb apps/cic-meta/scripts/initdb
```
start cluster
```
docker-compose up
@@ -35,8 +19,5 @@ docker-compose down -v
rebuild an image
```
docker-compose up --build <service_name>
```
Deployment variables are written to service-configs/.env after everything is up.
docker-compose up -d --no-deps --build <service_name>
```

View File

@@ -1,6 +0,0 @@
/validator/bloxbergData
/validator/bloxberg.log
keys/*
!keys/Bloxberg
keys/Bloxberg/*
!keys/Bloxberg/UTC--2021-02-10T16-57-35Z--03512a62-5334-20cc-4e44-71156f33cff6

View File

@@ -1,28 +0,0 @@
FROM parity/parity:v2.5.13-stable
# root user for installing os dep's and setting file permissions
# RUN apt-get update && sudo apt-get -y install sed
USER root
WORKDIR /root
# ARG BASE_PATH=root/.local/share/io.parity.ethereum/
ARG KEY_PATH=/root/keys/
# mount a key volume locally if you want to persist keys between runs
# to generate new account + keys run:
#
RUN mkdir -p $KEY_PATH
COPY ./validator/bloxberg.json \
./validator/bootnodes.txt \
./validator/validator.pwd \
./validator/validator.toml \
/root/
COPY keys/ /root/keys/
# RUN chown -R parity:parity $HOME/ && \
# chmod -R 775 $HOME/ && \
# chmod g+s $HOME/
# USER parity
ENTRYPOINT [ "parity" ]
CMD [ "--config", "/root/validator.toml", "--keys-path", "/root/keys/", "--password", "/root/validator.pwd" ]

View File

@@ -1,62 +0,0 @@
# Refactored bloxberg node
The original bloxberg node config was kind of annoying so I am running it more like vanilla parity. This way you can pass command flags directly to parity.
## Make some keys
```
docker build -t bloxie . && docker run -v ${PWD}/keys:/root/keys --rm -it -t bloxie account new --chain /root/bloxberg.json --keys-path /root/keys --password /root/validator.pwd
```
## Enter the signer address and passwords in the config files
validator.toml
```
[mining]
#CHANGE ENGINE SIGNER TO VALIDATOR ADDRESS
engine_signer = "0x0a7cac94bcd82ced7cd67651246dcb6a47553d53" <---- address goes here
reseal_on_txs = "none"
force_sealing = true
min_gas_price = 1000000
gas_floor_target = "10000000"
```
validator.pwd <--- put your password in here or leave it blank if you didn't enter a password.
## Run it!
Mount the keys folder on your host if you want to preserve the keys between runs...
```
docker run -v ${PWD}/keys:/root/keys -it -t bloxie
```
---
# bloxbergValidatorSetup
This is a Docker image for running a validator node on the bloxberg blockchain.
Remote Machine Minimum System Requirements:
* Ubuntu 16.04 or 18.04 Image (Other Operating Systems can work, but commands may have to be altered)
* Minimum 2 CPUs
* Minimum 2 GB RAM
* For future proofing, we recommend at least 100 GB of SSD storage.
These are simply the minimum requirements, and we recommend allocating more resources to ensure stability.
With the latest update to parity 2.7, it is also necessary for your server CPU to support aes. This can be found by running:
```
cat /proc/cpuinfo
```
on the server and checking in the flags column for aes.
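If you prefer to script that check rather than eyeballing the flags column, a minimal sketch along these lines works (the helper name is made up; it only assumes the standard /proc/cpuinfo layout):
```python
# Minimal sketch: returns True if the CPU advertises the aes flag parity >= 2.7 relies on.
def cpu_supports_aes(cpuinfo_path='/proc/cpuinfo'):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith('flags'):
                return 'aes' in line.split(':', 1)[1].split()
    return False

if __name__ == '__main__':
    print('aes supported:', cpu_supports_aes())
```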
Additionally, the blockchain connects to other nodes via port 30303, so it is important this port is open via your firewall beforehand.
In the `docker-compose.yml` you will also see the ports 8545 (JSON-RPC API) and 8546 (WebSocket). These can be used to interact with the blockchain via your local node, but they don't need to be accessible over the internet.
## Setup Process
1. Clone the repository to the server you are hosting the validator node on.
2. Edit `validator.yml` with a text editor (nano or vim) and change the NATIP variable to your external IP. Save this file.
3. Edit the `validator/validator.pwd` file and insert a secure password. This will be used to encrypt your private key.
4. Run `sudo ./setup.sh`.
5. Run `docker-compose -f validator.yml up`. This will start the docker container and generate an ethereum address and an enode address. Send both of these to the bloxberg consortium.
6. Use Ctrl+C to shut down the docker container. Lastly, run `docker-compose -f validator.yml up -d`. This will daemonize the container and start running the validator node in the background.

View File

@@ -1 +0,0 @@
{"id":"03512a62-5334-20cc-4e44-71156f33cff6","version":3,"crypto":{"cipher":"aes-128-ctr","cipherparams":{"iv":"dc388338c4d4e3203604aeb3d1c6bbfa"},"ciphertext":"8a945775b87089ce94537e011799f3abc1577c5dd1f3fbaebe1cd96dfdfc8b5a","kdf":"pbkdf2","kdfparams":{"c":10240,"dklen":32,"prf":"hmac-sha256","salt":"e8585836540caca01282381f5c1fe128e53b15b40f9d152fbc5a4f82a7967398"},"mac":"a7c7815e84a632ecf6d8f18c981bea73d50cd2e2a855a3e90477fc84ed14f906"},"address":"4f2a5902158c3969b245247f4154971d393301f2","name":"","meta":"{}"}

View File

@@ -1,27 +0,0 @@
#!/bin/bash
check_packages() {
if [ $(grep -i debian /etc/*-release | wc -l) -gt 0 ]; then
if [ ! -f /usr/bin/docker ]; then
echo "INSTALLING DOCKER"
sudo apt-get install -y docker-ce
fi
if [ ! -f /usr/local/bin/docker-compose ]; then
echo "INSTALLING DOCKER-COMPOSE"
sudo apt-get install docker-compose
fi
fi
#Chrony configuration
sudo apt install chrony
cat src/chronyConfig.tpl > /etc/chrony/chrony.conf
sudo service chrony restart
sudo systemctl enable chrony
sudo chown -R 1000:1000 ./validator
}
check_packages

View File

@@ -1,45 +0,0 @@
# Welcome to the chrony configuration file. See chrony.conf(5) for more
# information about usable directives.
# This will use (up to):
# - 4 sources from ntp.ubuntu.com which some are ipv6 enabled
# - 2 sources from 2.ubuntu.pool.ntp.org which is ipv6 enabled as well
# - 1 source from [01].ubuntu.pool.ntp.org each (ipv4 only atm)
# This means by default, up to 6 dual-stack and up to 2 additional IPv4-only
# sources will be used.
# At the same time it retains some protection against one of the entries being
# down (compare to just using one of the lines). See (LP: #1754358) for the
# discussion.
#
# About using servers from the NTP Pool Project in general see (LP: #104525).
# Approved by Ubuntu Technical Board on 2011-02-08.
# See http://www.pool.ntp.org/join.html for more information.
pool ntp.ubuntu.com iburst maxsources 4
pool 0.ubuntu.pool.ntp.org iburst maxsources 1
pool 1.ubuntu.pool.ntp.org iburst maxsources 1
pool 2.ubuntu.pool.ntp.org iburst maxsources 2
# This directive specifies the location of the file containing ID/key pairs for
# NTP authentication.
keyfile /etc/chrony/chrony.keys
# This directive specifies the file into which chronyd will store the rate
# information.
driftfile /var/lib/chrony/chrony.drift
# Uncomment the following line to turn logging on.
#log tracking measurements statistics
# Log files location.
logdir /var/log/chrony
# Stop bad estimates upsetting machine clock.
maxupdateskew 100.0
# This directive enables kernel synchronisation (every 11 minutes) of the
# real-time clock. Note that it can't be used along with the 'rtcfile' directive.
rtcsync
# Step the system clock instead of slewing it if the adjustment is larger than
# one second, but only in the first three clock updates.
makestep 1 3

View File

@@ -1,21 +0,0 @@
version: '2'
services:
validator:
build:
context: ./src
dockerfile: Dockerfile
container_name: validator
volumes:
- ./validator:/home/parity/.local/share/io.parity.ethereum
restart: unless-stopped
entrypoint: /start.sh
environment:
#CHANGE NAT_IP to your IP
NAT_IP: 130.183.206.234
#If you have multiple addresses, you can specify one address for the authority node.
#AUTH_ADDRESS: 0xab59a1ea1ac9af9f77518b9b4ad80942ade35088
user: "0:0"
ports:
- "8545:8545"
- "8546:8546"
- "30303:30303"

View File

@@ -1,126 +0,0 @@
{
"name": "Bloxberg",
"engine": {
"authorityRound": {
"params": {
"maximumUncleCountTransition": 5006743,
"maximumUncleCount": 0,
"stepDuration": "5",
"validators" : {
"list": ["0x4f2a5902158c3969b245247f4154971d393301f2"]
}
}
}
},
"params": {
"gasLimitBoundDivisor": "0x400",
"maximumExtraDataSize": "0x20",
"minGasLimit": "0x7A1200",
"networkID" : "0x2324",
"eip140Transition": "0x0",
"eip211Transition": "0x0",
"eip214Transition": "0x0",
"eip658Transition": "0x0",
"eip145Transition": 5006743,
"eip1014Transition": 5006743,
"eip1052Transition": 5006743,
"eip1283Transition": 5006743,
"eip1344Transition": 5006743,
"eip1706Transition": 5006743,
"eip1884Transition": 5006743,
"eip2028Transition": 5006743
},
"genesis": {
"seal": {
"authorityRound": {
"step": "0x0",
"signature": "0x0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"
}
},
"difficulty": "0x20000",
"gasLimit": "0x7A1200"
},
"accounts": {
"0x0000000000000000000000000000000000000001": { "balance": "1", "builtin": { "name": "ecrecover", "pricing": { "linear": { "base": 3000, "word": 0 } } } },
"0x0000000000000000000000000000000000000002": { "balance": "1", "builtin": { "name": "sha256", "pricing": { "linear": { "base": 60, "word": 12 } } } },
"0x0000000000000000000000000000000000000003": { "balance": "1", "builtin": { "name": "ripemd160", "pricing": { "linear": { "base": 600, "word": 120 } } } },
"0x0000000000000000000000000000000000000004": { "balance": "1", "builtin": { "name": "identity", "pricing": { "linear": { "base": 15, "word": 3 } } } },
"0x0000000000000000000000000000000000000005": { "builtin": { "name": "modexp", "activate_at": 0, "pricing": { "modexp": { "divisor": 20 } } } },
"0x0000000000000000000000000000000000000006": {
"builtin": {
"name": "alt_bn128_add",
"activate_at": 0,
"pricing": {
"alt_bn128_const_operations": {
"price": 500
}
}
}
},
"0000000000000000000000000000000000000007": {
"builtin": {
"name": "alt_bn128_mul",
"pricing": {
"0": {
"price": {
"alt_bn128_const_operations": {
"price": 40000
}
}
},
"5006743": {
"info": "Istanbul HF",
"price": {
"alt_bn128_const_operations": {
"price": 6000
}
}
}
}
}
},
"0000000000000000000000000000000000000008": {
"builtin": {
"name": "alt_bn128_pairing",
"pricing": {
"0": {
"price": {
"alt_bn128_pairing": {
"base": 100000,
"pair": 80000
}
}
},
"5006743": {
"info": "Istanbul HF",
"price": {
"alt_bn128_pairing": {
"base": 45000,
"pair": 34000
}
}
}
}
}
},
"0x0000000000000000000000000000000000000009": {
"builtin": {
"name": "blake2_f",
"pricing": {
"5006743": {
"info": "Istanbul HF",
"price": {
"blake2_f": {
"gas_per_round": 1
}
}
}
}
}
},
"0xEb3907eCad74a0013c259D5874AE7f22DcBcC95C": { "balance": "102000000000000000000000000000000" }
}
}

View File

@@ -1,16 +0,0 @@
#MPDL Bootnode and Authority
enode://a7a53baf91b612b25b84993c964beb987879bfe7430cf6acb55bd721b9c0d96ceb1849049b1dcc0aa6e86fa1e2234280581b16c1265d56644fb09085e6906034@130.183.206.234:30304
enode://16287e6f87719b4f58f9598d74c82f2e4cebf2f3064a59659d9a553c5653864407fe5654beab39476a4e209725a63bf4c808d5340dd23989b15754294fe4a845@141.5.98.231:30303
enode://e6b181c16d20194029c220ce886fdc7a745cb37ee655c3b41ea744ec89143db6731a1c01ff3c40b39f969079090ad34e0e3319e47b0d22a8d510ff1f7b5a9ac7@130.183.206.234:30303
#GeorgiaTech
enode://4d9e6925ef3a92315283a655e856aa29dd516172c4f38d2a8fcd58c233a2cd80c57b507fed3bf351b1ac0611e8c7fefd6fb1c49de2d0d15eb1816d43629ac4ba@3.14.148.213:30303
#CMU
enode://ce0154eb13c1c038017151dd1ff4d736178ffedc33f5e11fe694c247eb09279886d253c3c775486eb709a65057901e2788098f991c58e6ad26ff957a8f45253e@128.2.25.89:30303
#UCL
enode://e41a38d659f13d47f3d88c5178e0cfe97487d3568000b85ae3a4abbcc35404d2628cee8a7e9071b63802542bafd886447ecf1d02fc663be0534779094a3e4fd1@128.16.12.165:30303
#Sarajevo
enode://a246cbc6a66ccf0447c74e108211433849f8035e2c89a598b5dd19cbb5fc49fcadae7693c2bcf297b4d12158cbd587a4ca6b1c087c678d7c7167641a7cff432e@195.222.43.21:30303
#Zurich
enode://6173beaabd1a82d41e3615da2a755e99f3bd53e04737e2ae2f02a004c42445d8dfd1d87aadfafabc4c45a1df2a80f359ab628c93522d1dac70690a9689912bbc@129.132.178.74:30303
#Internet Security
enode://bc50cf41d29f346f43f84ee7d03b21cd2d4176cd759cd0d26ce04c16448d4c8611c4eab4c5543e29075c758c0afc2fd6743fa38f48dc0ed1f016efbb5c5a7654@194.94.127.78:30303

View File

@@ -1,36 +0,0 @@
[parity]
chain = "/root/bloxberg.json"
auto_update = "all"
[network]
port = 30303
reserved_peers = "/root/bootnodes.txt"
nat = "any"
discovery = false
[rpc]
port = 8545
apis = ["web3", "eth", "net", "personal", "parity", "parity_set", "traces", "rpc", "parity_accounts"]
interface = "all"
[websockets]
disable = false
port = 8546
#apis = ["web3", "eth", "net", "personal", "parity", "parity_set", "traces", "rpc", "parity_accounts"]
apis = ["all"]
interface = "all"
origins = ["*"]
[account]
password = ["/root/validator.pwd"]
[mining]
#CHANGE ENGINE SIGNER TO VALIDATOR ADDRESS
engine_signer = "0x4f2a5902158c3969b245247f4154971d393301f2"
reseal_on_txs = "none"
force_sealing = true
min_gas_price = 1000000
gas_floor_target = "10000000"
[footprint]
tracing = "off"

View File

@@ -1,22 +1,11 @@
.cic_eth_variables:
variables:
APP_NAME: cic-eth
DOCKERFILE_PATH: $APP_NAME/docker/Dockerfile
.cic_eth_changes_target:
.contract-migration-changes-target:
rules:
- changes:
- $CONTEXT/$APP_NAME/*
- $CONTEXT/*
build-mr-cic-eth:
build-cic-eth:
extends:
- .cic_eth_changes_target
- .py_build_merge_request
- .cic_eth_variables
build-push-cic-eth:
extends:
- .py_build_push
- .cic_eth_variables
- .contract-migration-changes-target
- .py-build
variables:
CONTEXT: apps/cic-eth

View File

@@ -9,8 +9,6 @@
* Add pure tcp and redis task api callbacks
* Add optional outgoing log status tracing
* Add lock lister and lock/unlock cli tool
* Add resend executable tool
* Add account create executable tool
- 0.9.0
* Require chain spec parameter in api
* Pass chain spec between tasks

View File

@@ -1 +0,0 @@
# CIC-ETH

View File

@@ -16,10 +16,7 @@ from cic_eth.db.models.role import AccountRole
from cic_eth.db.models.otx import Otx
from cic_eth.db.models.tx import TxCache
from cic_eth.db.models.nonce import Nonce
from cic_eth.db.enum import (
StatusEnum,
is_alive,
)
from cic_eth.db.enum import StatusEnum
from cic_eth.error import InitializationError
from cic_eth.db.error import TxStateChangeError
from cic_eth.eth.rpc import RpcClient
@@ -101,19 +98,6 @@ class AdminApi:
session.close()
def have_account(self, address_hex, chain_str):
s_have = celery.signature(
'cic_eth.eth.account.have',
[
address_hex,
chain_str,
],
queue=self.queue,
)
t = s_have.apply_async()
return t.get()
def resend(self, tx_hash_hex, chain_str, in_place=True, unlock=False):
logg.debug('resend {}'.format(tx_hash_hex))
s_get_tx_cache = celery.signature(
@@ -126,32 +110,24 @@ class AdminApi:
# TODO: This check should most likely be in resend task itself
tx_dict = s_get_tx_cache.apply_async().get()
#if tx_dict['status'] in [StatusEnum.REVERTED, StatusEnum.SUCCESS, StatusEnum.CANCELLED, StatusEnum.OBSOLETED]:
if not is_alive(getattr(StatusEnum, tx_dict['status']).value):
if tx_dict['status'] in [StatusEnum.REVERTED, StatusEnum.SUCCESS, StatusEnum.CANCELLED, StatusEnum.OBSOLETED]:
raise TxStateChangeError('Cannot resend mined or obsoleted transaction'.format(txold_hash_hex))
if not in_place:
s = None
if in_place:
s = celery.signature(
'cic_eth.eth.tx.resend_with_higher_gas',
[
tx_hash_hex,
chain_str,
None,
1.01,
],
queue=self.queue,
)
else:
raise NotImplementedError('resend as new not yet implemented')
s = celery.signature(
'cic_eth.eth.tx.resend_with_higher_gas',
[
chain_str,
None,
1.01,
],
queue=self.queue,
)
s_manual = celery.signature(
'cic_eth.queue.tx.set_manual',
[
tx_hash_hex,
],
queue=self.queue,
)
s_manual.link(s)
if unlock:
s_gas = celery.signature(
'cic_eth.admin.ctrl.unlock_send',
@@ -163,7 +139,7 @@ class AdminApi:
)
s.link(s_gas)
return s_manual.apply_async()
return s.apply_async()
def check_nonce(self, address):
s = celery.signature(

View File

@@ -1,26 +1,6 @@
# standard imports
import enum
@enum.unique
class StatusBits(enum.IntEnum):
QUEUED = 0x01
IN_NETWORK = 0x08
DEFERRED = 0x10
GAS_ISSUES = 0x20
LOCAL_ERROR = 0x100
NODE_ERROR = 0x200
NETWORK_ERROR = 0x400
UNKNOWN_ERROR = 0x800
FINAL = 0x1000
OBSOLETE = 0x2000
MANUAL = 0x8000
@enum.unique
class StatusEnum(enum.IntEnum):
"""
@@ -42,27 +22,21 @@ class StatusEnum(enum.IntEnum):
* SUCCESS: The transaction was successfully mined. (Block number will be set)
"""
PENDING = 0
SENDFAIL = StatusBits.DEFERRED | StatusBits.LOCAL_ERROR
RETRY = StatusBits.QUEUED | StatusBits.DEFERRED
READYSEND = StatusBits.QUEUED
OBSOLETED = StatusBits.OBSOLETE | StatusBits.IN_NETWORK
WAITFORGAS = StatusBits.GAS_ISSUES
SENT = StatusBits.IN_NETWORK
FUBAR = StatusBits.FINAL | StatusBits.UNKNOWN_ERROR
CANCELLED = StatusBits.IN_NETWORK | StatusBits.FINAL | StatusBits.OBSOLETE
OVERRIDDEN = StatusBits.FINAL | StatusBits.OBSOLETE | StatusBits.MANUAL
REJECTED = StatusBits.NODE_ERROR | StatusBits.FINAL
REVERTED = StatusBits.IN_NETWORK | StatusBits.FINAL | StatusBits.NETWORK_ERROR
SUCCESS = StatusBits.IN_NETWORK | StatusBits.FINAL
PENDING=-9
SENDFAIL=-8
RETRY=-7
READYSEND=-6
OBSOLETED=-2
WAITFORGAS=-1
SENT=0
FUBAR=1
CANCELLED=2
OVERRIDDEN=3
REJECTED=7
REVERTED=8
SUCCESS=9
@enum.unique
class LockEnum(enum.IntEnum):
"""
STICKY: When set, reset is not possible
@@ -74,40 +48,4 @@ class LockEnum(enum.IntEnum):
CREATE=2
SEND=4
QUEUE=8
QUERY=16
ALL=int(0xfffffffffffffffe)
def status_str(v, bits_only=False):
s = ''
if not bits_only:
try:
s = StatusEnum(v).name
return s
except ValueError:
pass
for i in range(16):
b = (1 << i)
if (b & 0xffff) & v:
n = StatusBits(b).name
if len(s) > 0:
s += ','
s += n
if not bits_only:
s += '*'
return s
def all_errors():
return StatusBits.LOCAL_ERROR | StatusBits.NODE_ERROR | StatusBits.NETWORK_ERROR | StatusBits.UNKNOWN_ERROR
def is_error_status(v):
return bool(v & all_errors())
def is_alive(v):
return bool(v & (StatusBits.FINAL | StatusBits.OBSOLETE) == 0)
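For context on the bitfield scheme being dropped in this hunk: each composite StatusEnum value was built by OR-ing StatusBits flags, and helpers like is_alive inspected those flags. A minimal standalone sketch, using only values copied verbatim from the removed lines above:
```python
import enum

# Values copied from the removed StatusBits definition above.
@enum.unique
class StatusBits(enum.IntEnum):
    QUEUED = 0x01
    IN_NETWORK = 0x08
    DEFERRED = 0x10
    GAS_ISSUES = 0x20
    LOCAL_ERROR = 0x100
    NODE_ERROR = 0x200
    NETWORK_ERROR = 0x400
    UNKNOWN_ERROR = 0x800
    FINAL = 0x1000
    OBSOLETE = 0x2000
    MANUAL = 0x8000

# Composite statuses were built by OR-ing flags, e.g.:
OBSOLETED = StatusBits.OBSOLETE | StatusBits.IN_NETWORK   # 0x2008
SUCCESS = StatusBits.IN_NETWORK | StatusBits.FINAL        # 0x1008

def is_alive(v):
    # A transaction stays "alive" only while neither FINAL nor OBSOLETE is set.
    return bool(v & (StatusBits.FINAL | StatusBits.OBSOLETE) == 0)

print(hex(OBSOLETED), is_alive(OBSOLETED))                 # 0x2008 False
print(hex(StatusBits.QUEUED), is_alive(StatusBits.QUEUED)) # 0x1 True
```
The replacement side of the hunk goes back to plain signed integers (PENDING=-9 through SUCCESS=9), which is why later hunks in this diff revert bitmask filters to ordering comparisons such as `status >= StatusEnum.SENT`.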

View File

@@ -1,14 +1,9 @@
# standard imports
import logging
# third-party imports
from sqlalchemy import Column, Integer
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
logg = logging.getLogger()
Model = declarative_base(name='Model')
@@ -26,11 +21,7 @@ class SessionBase(Model):
transactional = True
"""Whether the database backend supports query transactions. Should be explicitly set by initialization code"""
poolable = True
"""Whether the database backend supports connection pools. Should be explicitly set by initialization code"""
procedural = True
"""Whether the database backend supports stored procedures"""
localsessions = {}
"""Contains dictionary of sessions initiated by db model components"""
"""Whether the database backend supports query transactions. Should be explicitly set by initialization code"""
@staticmethod
@@ -80,23 +71,3 @@ class SessionBase(Model):
"""
SessionBase.engine.dispose()
SessionBase.engine = None
@staticmethod
def bind_session(session=None):
localsession = session
if localsession == None:
localsession = SessionBase.create_session()
localsession_key = str(id(localsession))
logg.debug('creating new session {}'.format(localsession_key))
SessionBase.localsessions[localsession_key] = localsession
return localsession
@staticmethod
def release_session(session=None):
session_key = str(id(session))
if SessionBase.localsessions.get(session_key) != None:
logg.debug('destroying session {}'.format(session_key))
session.commit()
session.close()

View File

@@ -8,12 +8,7 @@ from sqlalchemy.ext.hybrid import hybrid_property, hybrid_method
# local imports
from .base import SessionBase
from cic_eth.db.enum import (
StatusEnum,
StatusBits,
status_str,
is_error_status,
)
from cic_eth.db.enum import StatusEnum
from cic_eth.db.error import TxStateChangeError
#from cic_eth.eth.util import address_hex_from_signed_tx
@@ -59,24 +54,21 @@ class Otx(SessionBase):
block = Column(Integer)
def __set_status(self, status, session):
self.status |= status
session.add(self)
session.flush()
def __set_status(self, status, session=None):
localsession = session
if localsession == None:
localsession = SessionBase.create_session()
self.status = status
localsession.add(self)
localsession.flush()
def __reset_status(self, status, session):
status_edit = ~status & self.status
self.status &= status_edit
session.add(self)
session.flush()
if self.tracing:
self.__state_log(session=localsession)
def __status_already_set(self, status):
r = bool(self.status & status)
if r:
logg.warning('status bit {} already set on {}'.format(status.name, self.tx_hash))
return r
if session==None:
localsession.commit()
localsession.close()
def set_block(self, block, session=None):
@@ -110,23 +102,9 @@ class Otx(SessionBase):
:raises cic_eth.db.error.TxStateChangeError: State change represents a sequence of events that should not exist.
"""
if self.__status_already_set(StatusBits.GAS_ISSUES):
return
session = SessionBase.bind_session(session)
if self.status & StatusBits.FINAL:
raise TxStateChangeError('GAS_ISSUES cannot be set on an entry with FINAL state set ({})'.format(status_str(self.status)))
if self.status & StatusBits.IN_NETWORK:
raise TxStateChangeError('GAS_ISSUES cannot be set on an entry with IN_NETWORK state set ({})'.format(status_str(self.status)))
self.__set_status(StatusBits.GAS_ISSUES, session)
self.__reset_status(StatusBits.QUEUED | StatusBits.DEFERRED, session)
if self.tracing:
self.__state_log(session=session)
SessionBase.release_session(session)
if self.status >= StatusEnum.SENT.value:
raise TxStateChangeError('WAITFORGAS cannot succeed final state, had {}'.format(StatusEnum(self.status).name))
self.__set_status(StatusEnum.WAITFORGAS, session)
def fubar(self, session=None):
@@ -134,89 +112,28 @@ class Otx(SessionBase):
Only manipulates object, does not transaction or commit to backend.
"""
if self.__status_already_set(StatusBits.UNKNOWN_ERROR):
return
session = SessionBase.bind_session(session)
if self.status & StatusBits.FINAL:
raise TxStateChangeError('FUBAR cannot be set on an entry with FINAL state set ({})'.format(status_str(self.status)))
if is_error_status(self.status):
raise TxStateChangeError('FUBAR cannot be set on an entry with an error state already set ({})'.format(status_str(self.status)))
self.__set_status(StatusBits.UNKNOWN_ERROR | StatusBits.FINAL, session)
self.__set_status(StatusEnum.FUBAR, session)
if self.tracing:
self.__state_log(session=session)
SessionBase.release_session(session)
def reject(self, session=None):
"""Marks transaction as "rejected," which means the node rejected sending the transaction to the network. The nonce has not been spent, and the transaction should be replaced.
Only manipulates object, does not transaction or commit to backend.
"""
if self.__status_already_set(StatusBits.NODE_ERROR):
return
session = SessionBase.bind_session(session)
if self.status & StatusBits.FINAL:
raise TxStateChangeError('REJECTED cannot be set on an entry with FINAL state set ({})'.format(status_str(self.status)))
if self.status & StatusBits.IN_NETWORK:
raise TxStateChangeError('REJECTED cannot be set on an entry already IN_NETWORK ({})'.format(status_str(self.status)))
if is_error_status(self.status):
raise TxStateChangeError('REJECTED cannot be set on an entry with an error state already set ({})'.format(status_str(self.status)))
self.__set_status(StatusBits.NODE_ERROR | StatusBits.FINAL, session)
if self.status >= StatusEnum.SENT.value:
raise TxStateChangeError('REJECTED cannot succeed SENT or final state, had {}'.format(StatusEnum(self.status).name))
self.__set_status(StatusEnum.REJECTED, session)
if self.tracing:
self.__state_log(session=session)
SessionBase.release_session(session)
def override(self, manual=False, session=None):
def override(self, session=None):
"""Marks transaction as manually overridden.
Only manipulates object, does not transaction or commit to backend.
"""
if self.status >= StatusEnum.SENT.value:
raise TxStateChangeError('OVERRIDDEN cannot succeed SENT or final state, had {}'.format(StatusEnum(self.status).name))
self.__set_status(StatusEnum.OVERRIDDEN, session)
session = SessionBase.bind_session(session)
if self.status & StatusBits.FINAL:
raise TxStateChangeError('OVERRIDDEN/OBSOLETED cannot be set on an entry with FINAL state set ({})'.format(status_str(self.status)))
if self.status & StatusBits.IN_NETWORK:
raise TxStateChangeError('OVERRIDDEN/OBSOLETED cannot be set on an entry already IN_NETWORK ({})'.format(status_str(self.status)))
if self.status & StatusBits.OBSOLETE:
raise TxStateChangeError('OVERRIDDEN/OBSOLETED cannot be set on an entry already OBSOLETE ({})'.format(status_str(self.status)))
self.__set_status(StatusBits.OBSOLETE, session)
#if manual:
# self.__set_status(StatusBits.MANUAL, session)
self.__reset_status(StatusBits.QUEUED | StatusBits.IN_NETWORK, session)
if self.tracing:
self.__state_log(session=session)
SessionBase.release_session(session)
def manual(self, session=None):
session = SessionBase.bind_session(session)
if self.status & StatusBits.FINAL:
raise TxStateChangeError('OVERRIDDEN/OBSOLETED cannot be set on an entry with FINAL state set ({})'.format(status_str(self.status)))
self.__set_status(StatusBits.MANUAL, session)
if self.tracing:
self.__state_log(session=session)
SessionBase.release_session(session)
def retry(self, session=None):
"""Marks transaction as ready to retry after a timeout following a sendfail or a completed gas funding.
@@ -225,23 +142,9 @@ class Otx(SessionBase):
:raises cic_eth.db.error.TxStateChangeError: State change represents a sequence of events that should not exist.
"""
if self.__status_already_set(StatusBits.QUEUED):
return
session = SessionBase.bind_session(session)
if self.status & StatusBits.FINAL:
raise TxStateChangeError('RETRY cannot be set on an entry with FINAL state set ({})'.format(status_str(self.status)))
if not is_error_status(self.status) and not StatusBits.IN_NETWORK & self.status > 0:
raise TxStateChangeError('RETRY cannot be set on an entry that has no error ({})'.format(status_str(self.status)))
self.__set_status(StatusBits.QUEUED, session)
self.__reset_status(StatusBits.GAS_ISSUES, session)
if self.tracing:
self.__state_log(session=session)
SessionBase.release_session(session)
if self.status != StatusEnum.SENT.value and self.status != StatusEnum.SENDFAIL.value:
raise TxStateChangeError('RETRY must follow SENT or SENDFAIL, but had {}'.format(StatusEnum(self.status).name))
self.__set_status(StatusEnum.RETRY, session)
def readysend(self, session=None):
@@ -251,23 +154,9 @@ class Otx(SessionBase):
:raises cic_eth.db.error.TxStateChangeError: State change represents a sequence of events that should not exist.
"""
if self.__status_already_set(StatusBits.QUEUED):
return
session = SessionBase.bind_session(session)
if self.status & StatusBits.FINAL:
raise TxStateChangeError('READYSEND cannot be set on an entry with FINAL state set ({})'.format(status_str(self.status)))
if is_error_status(self.status):
raise TxStateChangeError('READYSEND cannot be set on an errored state ({})'.format(status_str(self.status)))
self.__set_status(StatusBits.QUEUED, session)
self.__reset_status(StatusBits.GAS_ISSUES, session)
if self.tracing:
self.__state_log(session=session)
SessionBase.release_session(session)
if self.status != StatusEnum.PENDING.value and self.status != StatusEnum.WAITFORGAS.value:
raise TxStateChangeError('READYSEND must follow PENDING or WAITFORGAS, but had {}'.format(StatusEnum(self.status).name))
self.__set_status(StatusEnum.READYSEND, session)
def sent(self, session=None):
@@ -277,22 +166,9 @@ class Otx(SessionBase):
:raises cic_eth.db.error.TxStateChangeError: State change represents a sequence of events that should not exist.
"""
if self.__status_already_set(StatusBits.IN_NETWORK):
return
session = SessionBase.bind_session(session)
if self.status & StatusBits.FINAL:
raise TxStateChangeError('SENT cannot be set on an entry with FINAL state set ({})'.format(status_str(self.status)))
self.__set_status(StatusBits.IN_NETWORK, session)
self.__reset_status(StatusBits.DEFERRED | StatusBits.QUEUED | StatusBits.LOCAL_ERROR | StatusBits.NODE_ERROR, session)
logg.debug('<<< status {}'.format(status_str(self.status)))
if self.tracing:
self.__state_log(session=session)
SessionBase.release_session(session)
if self.status > StatusEnum.SENT:
raise TxStateChangeError('SENT after {}'.format(StatusEnum(self.status).name))
self.__set_status(StatusEnum.SENT, session)
def sendfail(self, session=None):
@@ -302,23 +178,9 @@ class Otx(SessionBase):
:raises cic_eth.db.error.TxStateChangeError: State change represents a sequence of events that should not exist.
"""
if self.__status_already_set(StatusBits.NODE_ERROR):
return
session = SessionBase.bind_session(session)
if self.status & StatusBits.FINAL:
raise TxStateChangeError('SENDFAIL cannot be set on an entry with FINAL state set ({})'.format(status_str(self.status)))
if self.status & StatusBits.IN_NETWORK:
raise TxStateChangeError('SENDFAIL cannot be set on an entry with IN_NETWORK state set ({})'.format(status_str(self.status)))
self.__set_status(StatusBits.LOCAL_ERROR | StatusBits.DEFERRED, session)
self.__reset_status(StatusBits.QUEUED | StatusBits.GAS_ISSUES, session)
if self.tracing:
self.__state_log(session=session)
SessionBase.release_session(session)
if self.status not in [StatusEnum.PENDING, StatusEnum.SENT, StatusEnum.WAITFORGAS]:
raise TxStateChangeError('SENDFAIL must follow SENT or PENDING, but had {}'.format(StatusEnum(self.status).name))
self.__set_status(StatusEnum.SENDFAIL, session)
def minefail(self, block, session=None):
@@ -330,25 +192,14 @@ class Otx(SessionBase):
:type block: number
:raises cic_eth.db.error.TxStateChangeError: State change represents a sequence of events that should not exist.
"""
if self.__status_already_set(StatusBits.NETWORK_ERROR):
return
session = SessionBase.bind_session(session)
if self.status & StatusBits.FINAL:
raise TxStateChangeError('REVERTED cannot be set on an entry with FINAL state set ({})'.format(status_str(self.status)))
if not self.status & StatusBits.IN_NETWORK:
raise TxStateChangeError('REVERTED cannot be set on an entry without IN_NETWORK state set ({})'.format(status_str(self.status)))
if block != None:
self.block = block
self.__set_status(StatusBits.NETWORK_ERROR | StatusBits.FINAL, session)
if self.tracing:
self.__state_log(session=session)
SessionBase.release_session(session)
if self.status != StatusEnum.SENT:
logg.warning('REVERTED should follow SENT, but had {}'.format(StatusEnum(self.status).name))
#if self.status != StatusEnum.PENDING and self.status != StatusEnum.OBSOLETED and self.status != StatusEnum.SENT:
#if self.status > StatusEnum.SENT:
# raise TxStateChangeError('REVERTED must follow OBSOLETED, PENDING or SENT, but had {}'.format(StatusEnum(self.status).name))
self.__set_status(StatusEnum.REVERTED, session)
def cancel(self, confirmed=False, session=None):
@@ -362,36 +213,18 @@ class Otx(SessionBase):
:type confirmed: bool
:raises cic_eth.db.error.TxStateChangeError: State change represents a sequence of events that should not exist.
"""
session = SessionBase.bind_session(session)
if self.status & StatusBits.FINAL:
raise TxStateChangeError('CANCEL cannot be set on an entry with FINAL state set ({})'.format(status_str(self.status)))
if confirmed:
if not self.status & StatusBits.OBSOLETE:
raise TxStateChangeError('CANCEL can only be set on an entry marked OBSOLETE ({})'.format(status_str(self.status)))
if self.status != StatusEnum.OBSOLETED:
logg.warning('CANCELLED must follow OBSOLETED, but had {}'.format(StatusEnum(self.status).name))
#raise TxStateChangeError('CANCELLED must follow OBSOLETED, but had {}'.format(StatusEnum(self.status).name))
self.__set_status(StatusEnum.CANCELLED, session)
else:
elif self.status != StatusEnum.OBSOLETED:
if self.status > StatusEnum.SENT:
logg.warning('OBSOLETED must follow PENDING, SENDFAIL or SENT, but had {}'.format(StatusEnum(self.status).name))
#raise TxStateChangeError('OBSOLETED must follow PENDING, SENDFAIL or SENT, but had {}'.format(StatusEnum(self.status).name))
self.__set_status(StatusEnum.OBSOLETED, session)
# if confirmed:
# if self.status != StatusEnum.OBSOLETED:
# logg.warning('CANCELLED must follow OBSOLETED, but had {}'.format(StatusEnum(self.status).name))
# #raise TxStateChangeError('CANCELLED must follow OBSOLETED, but had {}'.format(StatusEnum(self.status).name))
# self.__set_status(StatusEnum.CANCELLED, session)
# elif self.status != StatusEnum.OBSOLETED:
# if self.status > StatusEnum.SENT:
# logg.warning('OBSOLETED must follow PENDING, SENDFAIL or SENT, but had {}'.format(StatusEnum(self.status).name))
# #raise TxStateChangeError('OBSOLETED must follow PENDING, SENDFAIL or SENT, but had {}'.format(StatusEnum(self.status).name))
# self.__set_status(StatusEnum.OBSOLETED, session)
if self.tracing:
self.__state_log(session=session)
SessionBase.release_session(session)
def success(self, block, session=None):
"""Marks that transaction was successfully mined.
@@ -402,24 +235,13 @@ class Otx(SessionBase):
:raises cic_eth.db.error.TxStateChangeError: State change represents a sequence of events that should not exist.
"""
session = SessionBase.bind_session(session)
if self.status & StatusBits.FINAL:
raise TxStateChangeError('SUCCESS cannot be set on an entry with FINAL state set ({})'.format(status_str(self.status)))
if not self.status & StatusBits.IN_NETWORK:
raise TxStateChangeError('SUCCESS cannot be set on an entry without IN_NETWORK state set ({})'.format(status_str(self.status)))
if is_error_status(self.status):
raise TxStateChangeError('SUCCESS cannot be set on an entry with error state set ({})'.format(status_str(self.status)))
if block != None:
self.block = block
if self.status != StatusEnum.SENT:
logg.error('SUCCESS should follow SENT, but had {}'.format(StatusEnum(self.status).name))
#raise TxStateChangeError('SUCCESS must follow SENT, but had {}'.format(StatusEnum(self.status).name))
self.__set_status(StatusEnum.SUCCESS, session)
if self.tracing:
self.__state_log(session=session)
SessionBase.release_session(session)
@staticmethod
def get(status=0, limit=4096, status_exact=True):
@@ -628,3 +450,6 @@ class OtxSync(SessionBase):
self.tx_height_session = 0
self.block_height_backlog = 0
self.tx_height_backlog = 0

View File

@@ -0,0 +1,194 @@
# standard imports
import logging
# third-party imports
import web3
import celery
from erc20_approval_escrow import TransferApproval
from cic_registry import CICRegistry
from cic_registry.chain import ChainSpec
# local imports
from cic_eth.db.models.tx import TxCache
from cic_eth.db.models.base import SessionBase
from cic_eth.eth import RpcClient
from cic_eth.eth.factory import TxFactory
from cic_eth.eth.task import sign_and_register_tx
from cic_eth.eth.util import unpack_signed_raw_tx
from cic_eth.eth.task import create_check_gas_and_send_task
from cic_eth.error import TokenCountError
celery_app = celery.current_app
logg = logging.getLogger()
contract_function_signatures = {
'request': 'b0addede',
}
class TransferRequestTxFactory(TxFactory):
"""Factory for creating Transfer request transactions using the TransferApproval contract backend
"""
def request(
self,
token_address,
beneficiary_address,
amount,
chain_spec,
):
"""Create a new TransferApproval.request transaction
:param token_address: Token to create transfer request for
:type token_address: str, 0x-hex
:param beneficiary_address: Beneficiary of token transfer
:type beneficiary_address: str, 0x-hex
:param amount: Amount of tokens to transfer
:type amount: number
:param chain_spec: Chain spec
:type chain_spec: cic_registry.chain.ChainSpec
:returns: Transaction in standard Ethereum format
:rtype: dict
"""
transfer_approval = CICRegistry.get_contract(chain_spec, 'TransferApproval', 'TransferAuthorization')
fn = transfer_approval.function('createRequest')
tx_approval_buildable = fn(beneficiary_address, token_address, amount)
transfer_approval_gas = transfer_approval.gas('createRequest')
tx_approval = tx_approval_buildable.buildTransaction({
'from': self.address,
'gas': transfer_approval_gas,
'gasPrice': self.gas_price,
'chainId': chain_spec.chain_id(),
'nonce': self.next_nonce(),
})
return tx_approval
def unpack_transfer_approval_request(data):
"""Verifies that a transaction is an "TransferApproval.request" transaction, and extracts call parameters from it.
:param data: Raw input data from Ethereum transaction.
:type data: str, 0x-hex
:raises ValueError: Function signature does not match TransferApproval.request
:returns: Parsed parameters
:rtype: dict
"""
f = data[2:10]
if f != contract_function_signatures['request']:
raise ValueError('Invalid transfer request data ({})'.format(f))
d = data[10:]
return {
'to': web3.Web3.toChecksumAddress('0x' + d[64-40:64]),
'token': web3.Web3.toChecksumAddress('0x' + d[128-40:128]),
'amount': int(d[128:], 16)
}
@celery_app.task(bind=True)
def transfer_approval_request(self, tokens, holder_address, receiver_address, value, chain_str):
"""Creates a new transfer approval
:param tokens: Token to generate transfer request for
:type tokens: list with single token spec as dict
:param holder_address: Address to generate transfer on behalf of
:type holder_address: str, 0x-hex
:param receiver_address: Address to transfer tokens to
:type receiver_address: str, 0x-hex
:param value: Amount of tokens to transfer
:type value: number
:param chain_spec: Chain spec string representation
:type chain_spec: str
:raises cic_eth.error.TokenCountError: More than one token in tokens argument
:returns: Raw signed transaction
:rtype: list with transaction as only element
"""
if len(tokens) != 1:
raise TokenCountError
chain_spec = ChainSpec.from_chain_str(chain_str)
queue = self.request.delivery_info['routing_key']
t = tokens[0]
c = RpcClient(holder_address)
txf = TransferRequestTxFactory(holder_address, c)
tx_transfer = txf.request(t['address'], receiver_address, value, chain_spec)
(tx_hash_hex, tx_signed_raw_hex) = sign_and_register_tx(tx_transfer, chain_str, queue, 'cic_eth.eth.request.otx_cache_transfer_approval_request')
gas_budget = tx_transfer['gas'] * tx_transfer['gasPrice']
s = create_check_gas_and_send_task(
[tx_signed_raw_hex],
chain_str,
holder_address,
gas_budget,
[tx_hash_hex],
queue,
)
s.apply_async()
return [tx_signed_raw_hex]
@celery_app.task()
def otx_cache_transfer_approval_request(
tx_hash_hex,
tx_signed_raw_hex,
chain_str,
):
"""Generates and commits transaction cache metadata for an TransferApproval.request transaction
:param tx_hash_hex: Transaction hash
:type tx_hash_hex: str, 0x-hex
:param tx_signed_raw_hex: Raw signed transaction
:type tx_signed_raw_hex: str, 0x-hex
:param chain_str: Chain spec string representation
:type chain_str: str
:returns: Transaction hash and id of cache element in storage backend, respectively
:rtype: tuple
"""
chain_spec = ChainSpec.from_chain_str(chain_str)
tx_signed_raw_bytes = bytes.fromhex(tx_signed_raw_hex[2:])
tx = unpack_signed_raw_tx(tx_signed_raw_bytes, chain_spec.chain_id())
logg.debug('in otx cache transfer approval request')
(txc, cache_id) = cache_transfer_approval_request_data(tx_hash_hex, tx)
return txc
@celery_app.task()
def cache_transfer_approval_request_data(
tx_hash_hex,
tx,
):
"""Helper function for otx_cache_transfer_approval_request
:param tx_hash_hex: Transaction hash
:type tx_hash_hex: str, 0x-hex
:param tx: Signed raw transaction
:type tx: str, 0x-hex
:returns: Transaction hash and id of cache element in storage backend, respectively
:rtype: tuple
"""
tx_data = unpack_transfer_approval_request(tx['data'])
logg.debug('tx approval request data {}'.format(tx_data))
logg.debug('tx approval request {}'.format(tx))
session = SessionBase.create_session()
tx_cache = TxCache(
tx_hash_hex,
tx['from'],
tx_data['to'],
tx_data['token'],
tx_data['token'],
tx_data['amount'],
tx_data['amount'],
)
session.add(tx_cache)
session.commit()
cache_id = tx_cache.id
session.close()
return (tx_hash_hex, cache_id)
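As a side note on the byte layout that unpack_transfer_approval_request relies on: after the 4-byte selector, each argument occupies a 32-byte ABI word, with addresses right-aligned in their word. A self-contained sketch with made-up example values (illustrative only, not part of the module, and skipping the web3 checksum step):
```python
# Illustrative only: mirrors the slicing in unpack_transfer_approval_request.
selector = 'b0addede'                 # the 'request' selector from contract_function_signatures above
beneficiary = 'aa' * 20               # 20-byte example address
token = 'bb' * 20                     # 20-byte example address
amount = 1000

data = (
    '0x' + selector
    + beneficiary.rjust(64, '0')              # word 0: beneficiary, right-aligned
    + token.rjust(64, '0')                    # word 1: token, right-aligned
    + format(amount, 'x').rjust(64, '0')      # word 2: amount
)

d = data[10:]  # strip '0x' and the 8-hex-char selector
parsed = {
    'to': '0x' + d[64 - 40:64],
    'token': '0x' + d[128 - 40:128],
    'amount': int(d[128:], 16),
}
print(parsed['amount'])  # 1000
```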

View File

@@ -13,10 +13,7 @@ from .rpc import RpcClient
from cic_eth.db import Otx, SessionBase
from cic_eth.db.models.tx import TxCache
from cic_eth.db.models.lock import Lock
from cic_eth.db.enum import (
LockEnum,
StatusBits,
)
from cic_eth.db.enum import LockEnum
from cic_eth.error import PermanentTxError
from cic_eth.error import TemporaryTxError
from cic_eth.error import NotLocalTxError
@@ -402,10 +399,9 @@ def refill_gas(self, recipient_address, chain_str):
chain_spec = ChainSpec.from_chain_str(chain_str)
session = SessionBase.create_session()
status_filter = StatusBits.FINAL | StatusBits.NODE_ERROR | StatusBits.NETWORK_ERROR | StatusBits.UNKNOWN_ERROR
q = session.query(Otx.tx_hash)
q = q.join(TxCache)
q = q.filter(Otx.status.op('&')(StatusBits.FINAL.value)==0)
q = q.filter(Otx.status<=0)
q = q.filter(TxCache.from_value!='0x00')
q = q.filter(TxCache.recipient==recipient_address)
c = q.count()
@@ -499,7 +495,7 @@ def resend_with_higher_gas(self, txold_hash_hex, chain_str, gas=None, default_fa
tx_signed_raw_bytes = bytes.fromhex(otx.signed_tx[2:])
tx = unpack_signed_raw_tx(tx_signed_raw_bytes, chain_spec.chain_id())
logg.debug('resend otx {} {}'.format(tx, otx.signed_tx))
logg.debug('otx {} {}'.format(tx, otx.signed_tx))
queue = self.request.delivery_info['routing_key']

View File

@@ -35,10 +35,6 @@ def unpack_signed_raw_tx(tx_raw_bytes, chain_id):
if chain_id != 0:
v = int.from_bytes(d[6], 'big')
vb = v - (chain_id * 2) - 35
while len(d[7]) < 32:
d[7] = b'\x00' + d[7]
while len(d[8]) < 32:
d[8] = b'\x00' + d[8]
s = b''.join([d[7], d[8], bytes([vb])])
so = KeyAPI.Signature(signature_bytes=s)

View File

@@ -6,7 +6,6 @@ import datetime
# third-party imports
import celery
from sqlalchemy import or_
from sqlalchemy import not_
from sqlalchemy import tuple_
from sqlalchemy import func
@@ -17,12 +16,8 @@ from cic_eth.db.models.otx import OtxStateLog
from cic_eth.db.models.tx import TxCache
from cic_eth.db.models.lock import Lock
from cic_eth.db import SessionBase
from cic_eth.db.enum import (
StatusEnum,
LockEnum,
StatusBits,
is_alive,
)
from cic_eth.db.enum import StatusEnum
from cic_eth.db.enum import LockEnum
from cic_eth.eth.util import unpack_signed_raw_tx # TODO: should not be in same sub-path as package that imports queue.tx
from cic_eth.error import NotLocalTxError
from cic_eth.error import LockedError
@@ -75,7 +70,10 @@ def create(nonce, holder_address, tx_hash, signed_tx, chain_str, obsolete_predec
for otx in q.all():
logg.info('otx {} obsoleted by {}'.format(otx.tx_hash, tx_hash))
otx.cancel(confirmed=False, session=session)
if otx.status == StatusEnum.SENT:
otx.cancel(False, session=session)
elif otx.status != StatusEnum.OBSOLETED:
otx.override(session=session)
session.commit()
session.close()
@@ -169,7 +167,6 @@ def set_final_status(tx_hash, block=None, fail=False):
return tx_hash
@celery_app.task()
def set_cancel(tx_hash, manual=False):
"""Used to set the status when a transaction is cancelled.
@@ -253,33 +250,6 @@ def set_fubar(tx_hash):
return tx_hash
@celery_app.task()
def set_manual(tx_hash):
"""Used to set the status when queue is manually changed
Will set the state to MANUAL
:param tx_hash: Transaction hash of record to modify
:type tx_hash: str, 0x-hex
:raises NotLocalTxError: If transaction not found in queue.
"""
session = SessionBase.create_session()
o = session.query(Otx).filter(Otx.tx_hash==tx_hash).first()
if o == None:
session.close()
raise NotLocalTxError('queue does not contain tx hash {}'.format(tx_hash))
session.flush()
o.manual(session=session)
session.commit()
session.close()
return tx_hash
@celery_app.task()
def set_ready(tx_hash):
"""Used to mark a transaction as ready to be sent to network
@@ -295,11 +265,14 @@ def set_ready(tx_hash):
raise NotLocalTxError('queue does not contain tx hash {}'.format(tx_hash))
session.flush()
if o.status & StatusBits.GAS_ISSUES or o.status == StatusEnum.PENDING:
if o.status == StatusEnum.WAITFORGAS or o.status == StatusEnum.PENDING:
o.readysend(session=session)
else:
o.retry(session=session)
logg.debug('ot otx otx {} {}'.format(tx_hash, o))
session.add(o)
session.commit()
session.close()
@@ -331,7 +304,6 @@ def set_waitforgas(tx_hash):
return tx_hash
@celery_app.task()
def get_state_log(tx_hash):
@@ -511,14 +483,13 @@ def get_paused_txs(status=None, sender=None, chain_id=0):
q = session.query(Otx)
if status != None:
#if status == StatusEnum.PENDING or status >= StatusEnum.SENT:
if status == StatusEnum.PENDING or status & StatusBits.IN_NETWORK or not is_alive(status):
if status == StatusEnum.PENDING or status >= StatusEnum.SENT:
raise ValueError('not a valid paused tx value: {}'.format(status))
q = q.filter(Otx.status.op('&')(status.value)==status.value)
q = q.filter(Otx.status==status)
q = q.join(TxCache)
else:
q = q.filter(Otx.status>StatusEnum.PENDING.value)
q = q.filter(not_(Otx.status.op('&')(StatusBits.IN_NETWORK.value)>0))
q = q.filter(Otx.status>StatusEnum.PENDING)
q = q.filter(Otx.status<StatusEnum.SENT)
if sender != None:
q = q.filter(TxCache.sender==sender)
@@ -537,7 +508,7 @@ def get_paused_txs(status=None, sender=None, chain_id=0):
return txs
def get_status_tx(status, before=None, exact=False, limit=0):
def get_status_tx(status, before=None, limit=0):
"""Retrieve transaction with a specific queue status.
:param status: Status to match transactions with
@@ -554,10 +525,7 @@ def get_status_tx(status, before=None, exact=False, limit=0):
q = session.query(Otx)
q = q.join(TxCache)
q = q.filter(TxCache.date_updated<before)
if exact:
q = q.filter(Otx.status==status.value)
else:
q = q.filter(Otx.status.op('&')(status.value)==status.value)
q = q.filter(Otx.status==status)
i = 0
for o in q.all():
if limit > 0 and i == limit:
@@ -589,29 +557,27 @@ def get_upcoming_tx(status=StatusEnum.READYSEND, recipient=None, before=None, ch
:rtype: dict, with transaction hash as key, signed raw transaction as value
"""
session = SessionBase.create_session()
q_outer = session.query(
q = session.query(
TxCache.sender,
func.min(Otx.nonce).label('nonce'),
)
q_outer = q_outer.join(TxCache)
q_outer = q_outer.join(Lock, isouter=True)
q_outer = q_outer.filter(or_(Lock.flags==None, Lock.flags.op('&')(LockEnum.SEND.value)==0))
q = q.join(TxCache)
q = q.join(Lock, isouter=True)
q = q.filter(or_(Lock.flags==None, Lock.flags.op('&')(LockEnum.SEND.value)==0))
if not is_alive(status):
raise ValueError('not a valid non-final tx value: {}'.format(status))
if status == StatusEnum.PENDING:
q_outer = q_outer.filter(Otx.status==status.value)
else:
q_outer = q_outer.filter(Otx.status.op('&')(status.value)==status.value)
if status >= StatusEnum.SENT:
raise ValueError('not a valid non-final tx value: {}'.format(s))
q = q.filter(Otx.status==status)
if recipient != None:
q_outer = q_outer.filter(TxCache.recipient==recipient)
q = q.filter(TxCache.recipient==recipient)
q_outer = q_outer.group_by(TxCache.sender)
q = q.group_by(TxCache.sender)
txs = {}
for r in q_outer.all():
results = q.all()
for r in results:
q = session.query(Otx)
q = q.join(TxCache)
q = q.filter(TxCache.sender==r.sender)
@@ -621,6 +587,7 @@ def get_upcoming_tx(status=StatusEnum.READYSEND, recipient=None, before=None, ch
q = q.filter(TxCache.date_checked<before)
q = q.order_by(TxCache.date_created.desc())
o = q.first()
# TODO: audit; should this be possible if a row is found in the initial query? If not, at a minimum log error.
@@ -635,6 +602,7 @@ def get_upcoming_tx(status=StatusEnum.READYSEND, recipient=None, before=None, ch
q = q.filter(TxCache.otx_id==o.id)
o = q.first()
logg.debug('oooo {}'.format(o))
o.date_checked = datetime.datetime.now()
session.add(o)
session.commit()

View File

@@ -1,84 +0,0 @@
#!/usr/bin/python
#import socket
import sys
import os
import logging
import uuid
import json
import celery
from cic_eth.api import Api
import confini
import argparse
import redis
logging.basicConfig(level=logging.WARNING)
logg = logging.getLogger('create_account_script')
logging.getLogger('confini').setLevel(logging.WARNING)
logging.getLogger('gnupg').setLevel(logging.WARNING)
default_config_dir = os.environ.get('CONFINI_DIR', '/usr/local/etc/cic')
argparser = argparse.ArgumentParser()
argparser.add_argument('--no-register', dest='no_register', action='store_true', help='Do not register new account in on-chain accounts index')
argparser.add_argument('-c', type=str, default=default_config_dir, help='config file')
argparser.add_argument('-i', '--chain-spec', dest='i', type=str, help='chain spec')
argparser.add_argument('--redis-host', dest='redis_host', type=str, help='redis host to use for task submission')
argparser.add_argument('--redis-port', dest='redis_port', type=int, help='redis host to use for task submission')
argparser.add_argument('--redis-db', dest='redis_db', type=int, help='redis db to use for task submission and callback')
argparser.add_argument('--redis-host-callback', dest='redis_host_callback', default='localhost', type=str, help='redis host to use for callback')
argparser.add_argument('--redis-port-callback', dest='redis_port_callback', default=6379, type=int, help='redis port to use for callback')
argparser.add_argument('--timeout', default=20.0, type=float, help='Callback timeout')
argparser.add_argument('-q', type=str, default='cic-eth', help='Task queue')
argparser.add_argument('-v', action='store_true', help='Be verbose')
argparser.add_argument('-vv', action='store_true', help='Be more verbose')
args = argparser.parse_args()
if args.vv:
logg.setLevel(logging.DEBUG)
if args.v:
logg.setLevel(logging.INFO)
config_dir = args.c
config = confini.Config(config_dir, os.environ.get('CONFINI_ENV_PREFIX'))
config.process()
args_override = {
'CIC_CHAIN_SPEC': getattr(args, 'i'),
'REDIS_HOST': getattr(args, 'redis_host'),
'REDIS_PORT': getattr(args, 'redis_port'),
'REDIS_DB': getattr(args, 'redis_db'),
}
config.dict_override(args_override, 'cli')
celery_app = celery.Celery(broker=config.get('CELERY_BROKER_URL'), backend=config.get('CELERY_RESULT_URL'))
def main():
redis_host = config.get('REDIS_HOST')
redis_port = config.get('REDIS_PORT')
redis_db = config.get('REDIS_DB')
redis_channel = str(uuid.uuid4())
r = redis.Redis(redis_host, redis_port, redis_db)
ps = r.pubsub()
ps.subscribe(redis_channel)
ps.get_message()
api = Api(
config.get('CIC_CHAIN_SPEC'),
queue=args.q,
callback_param='{}:{}:{}:{}'.format(args.redis_host_callback, args.redis_port_callback, redis_db, redis_channel),
callback_task='cic_eth.callbacks.redis.redis',
callback_queue=args.q,
)
register = not args.no_register
logg.debug('register {}'.format(register))
t = api.create_account(register=register)
ps.get_message()
m = ps.get_message(timeout=args.timeout)
print(json.loads(m['data']))
if __name__ == '__main__':
main()

View File

@@ -68,7 +68,7 @@ app = celery.Celery(backend=config.get('CELERY_RESULT_URL'), broker=config.get(
queue = args.q
dsn = dsn_from_config(config)
SessionBase.connect(dsn, debug=config.true('DATABASE_DEBUG'))
SessionBase.connect(dsn)
re_websocket = re.compile('^wss?://')

View File

@@ -1,95 +0,0 @@
# standard imports
import logging
import argparse
import re
import os
# third-party imports
import celery
import confini
import web3
from cic_registry import CICRegistry
from cic_registry.chain import ChainSpec
from cic_registry.chain import ChainRegistry
# local imports
from cic_eth.eth.rpc import RpcClient
from cic_eth.api.api_admin import AdminApi
logging.basicConfig(level=logging.WARNING)
logg = logging.getLogger()
logging.getLogger('web3').setLevel(logging.WARNING)
logging.getLogger('urllib3').setLevel(logging.WARNING)
default_config_dir = os.path.join('/usr/local/etc/cic-eth')
argparser = argparse.ArgumentParser()
argparser.add_argument('-c', type=str, default=default_config_dir, help='config root to use')
argparser.add_argument('-p', '--provider', dest='p', default='http://localhost:8545', type=str, help='Web3 provider url (http only)')
argparser.add_argument('-i', '--chain-spec', dest='i', type=str, default='Ethereum:1', help='Chain specification string')
argparser.add_argument('--unlock', action='store_true', help='Append task to unlock account')
argparser.add_argument('--env-prefix', default=os.environ.get('CONFINI_ENV_PREFIX'), dest='env_prefix', type=str, help='environment prefix for variables to overwrite configuration')
argparser.add_argument('-v', action='store_true', help='Be verbose')
argparser.add_argument('-vv', action='store_true', help='Be more verbose')
argparser.add_argument('tx_hash', type=str, help='Transaction hash')
args = argparser.parse_args()
if args.vv:
logg.setLevel(logging.DEBUG)
elif args.v:
logg.setLevel(logging.INFO)
config_dir = os.path.join(args.c)
os.makedirs(config_dir, 0o777, True)
config = confini.Config(config_dir, args.env_prefix)
config.process()
args_override = {
'ETH_PROVIDER': getattr(args, 'p'),
'CIC_CHAIN_SPEC': getattr(args, 'i'),
}
# override args
config.censor('PASSWORD', 'DATABASE')
config.censor('PASSWORD', 'SSL')
logg.debug('config loaded from {}:\n{}'.format(config_dir, config))
chain_spec = ChainSpec.from_chain_str(args.i)
chain_str = str(chain_spec)
re_websocket = re.compile('^wss?://')
re_http = re.compile('^https?://')
blockchain_provider = config.get('ETH_PROVIDER')
if re.match(re_websocket, blockchain_provider) != None:
blockchain_provider = web3.Web3.WebsocketProvider(blockchain_provider)
elif re.match(re_http, blockchain_provider) != None:
blockchain_provider = web3.Web3.HTTPProvider(blockchain_provider)
else:
raise ValueError('unknown provider url {}'.format(blockchain_provider))
def web3_constructor():
w3 = web3.Web3(blockchain_provider)
return (blockchain_provider, w3)
RpcClient.set_constructor(web3_constructor)
celery_app = celery.Celery(broker=config.get('CELERY_BROKER_URL'), backend=config.get('CELERY_RESULT_URL'))
c = RpcClient(chain_spec)
CICRegistry.init(c.w3, config.get('CIC_REGISTRY_ADDRESS'), chain_spec)
chain_registry = ChainRegistry(chain_spec)
CICRegistry.add_chain_registry(chain_registry)
CICRegistry.add_path(config.get('ETH_ABI_DIR'))
CICRegistry.load_for(chain_spec)
def main():
api = AdminApi(c)
tx_details = api.tx(chain_spec, args.tx_hash)
t = api.resend(args.tx_hash, chain_str, unlock=True)
if __name__ == '__main__':
main()

View File

@@ -55,25 +55,62 @@ SessionBase.connect(dsn)
celery_app = celery.Celery(backend=config.get('CELERY_RESULT_URL'), broker=config.get('CELERY_BROKER_URL'))
queue = args.q
re_something = r'^/something/?'
re_transfer_approval_request = r'^/transferrequest/?'
chain_spec = ChainSpec.from_chain_str(config.get('CIC_CHAIN_SPEC'))
def process_something(session, env):
r = re.match(re_something, env.get('PATH_INFO'))
def process_transfer_approval_request(session, env):
r = re.match(re_transfer_approval_request, env.get('PATH_INFO'))
if not r:
return None
#if env.get('CONTENT_TYPE') != 'application/json':
# raise AttributeError('content type')
if env.get('CONTENT_TYPE') != 'application/json':
raise AttributeError('content type')
#if env.get('REQUEST_METHOD') != 'POST':
# raise AttributeError('method')
if env.get('REQUEST_METHOD') != 'POST':
raise AttributeError('method')
#post_data = json.load(env.get('wsgi.input'))
#return ('text/plain', 'foo'.encode('utf-8'),)
post_data = json.load(env.get('wsgi.input'))
token_address = web3.Web3.toChecksumAddress(post_data['token_address'])
holder_address = web3.Web3.toChecksumAddress(post_data['holder_address'])
beneficiary_address = web3.Web3.toChecksumAddress(post_data['beneficiary_address'])
value = int(post_data['value'])
logg.debug('transfer approval request token {} to {} from {} value {}'.format(
token_address,
beneficiary_address,
holder_address,
value,
)
)
s = celery.signature(
'cic_eth.eth.request.transfer_approval_request',
[
[
{
'address': token_address,
},
],
holder_address,
beneficiary_address,
value,
config.get('CIC_CHAIN_SPEC'),
],
queue=queue,
)
t = s.apply_async()
r = t.get()
tx_raw_bytes = bytes.fromhex(r[0][2:])
tx = unpack_signed_raw_tx(tx_raw_bytes, chain_spec.chain_id())
for r in t.collect():
logg.debug('result {}'.format(r))
if not t.successful():
raise RuntimeError(tx['hash'])
return ('text/plain', tx['hash'].encode('utf-8'),)
# uwsgi application
@@ -88,7 +125,7 @@ def application(env, start_response):
session = SessionBase.create_session()
for handler in [
process_something,
process_transfer_approval_request,
]:
try:
r = handler(session, env)
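As an aside, a minimal sketch of how a client could call the transferrequest handler added above. The host and port are assumptions, not taken from this compare; the field names and the plain-text transaction hash response come from the handler itself.

```python
# Hypothetical client for the /transferrequest/ handler above.
# Host/port are placeholders; the addresses are examples only.
import json
from urllib.request import Request, urlopen

url = 'http://localhost:5000/transferrequest/'
payload = {
    'token_address': '0x' + '00' * 20,
    'holder_address': '0x' + '11' * 20,
    'beneficiary_address': '0x' + '22' * 20,
    'value': 1024,
}
req = Request(url,
              headers={'Content-Type': 'application/json'},
              data=json.dumps(payload).encode('utf-8'),
              method='POST')
with urlopen(req) as rs:
    print(rs.read().decode('utf-8'))  # tx hash of the queued approval request
```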

View File

@@ -211,11 +211,6 @@ def main():
chain_registry = ChainRegistry(chain_spec)
CICRegistry.add_chain_registry(chain_registry, True)
try:
CICRegistry.get_contract(chain_spec, 'CICRegistry')
except Exception as e:
logg.exception('Eek, registry failure is baaad juju {}'.format(e))
sys.exit(1)
if config.get('ETH_ACCOUNT_ACCOUNTS_INDEX_WRITER') != None:
CICRegistry.add_role(chain_spec, config.get('ETH_ACCOUNT_ACCOUNTS_INDEX_WRITER'), 'AccountRegistry', True)

View File

@@ -10,7 +10,7 @@ version = (
0,
10,
0,
'alpha.26',
'alpha.22',
)
version_object = semver.VersionInfo(

View File

@@ -41,8 +41,3 @@ COPY cic-eth/tests/ tests/
COPY cic-eth/config/ /usr/local/etc/cic-eth/
COPY cic-eth/cic_eth/db/migrations/ /usr/local/share/cic-eth/alembic/
COPY cic-eth/crypto_dev_signer_config/ /usr/local/etc/crypto-dev-signer/
RUN apt-get install -y git && \
git clone https://gitlab.com/grassrootseconomics/cic-contracts.git && \
mkdir -p /usr/local/share/cic/solidity && \
cp -R cic-contracts/abis /usr/local/share/cic/solidity/abi

View File

@@ -1,6 +1,4 @@
#!/bin/bash
set -e
>&2 echo executing database migration
migrate.py -c /usr/local/etc/cic-eth --migrations-dir /usr/local/share/cic-eth/alembic -vv
set +e

View File

@@ -1,6 +1,5 @@
#!/bin/bash
set -e
. ./db.sh
# set CONFINI_ENV_PREFIX to override the env prefix to override env vars
@@ -28,4 +27,3 @@ while true; do
sleep 15;
done
set +e

View File

@@ -1,17 +1,16 @@
web3==5.12.2
celery==4.4.7
crypto-dev-signer~=0.4.13rc2
crypto-dev-signer~=0.4.13b13
confini~=0.3.6b1
cic-registry~=0.5.3a12
cic-registry~=0.5.3a10
cic-bancor~=0.0.6
redis==3.5.3
alembic==1.4.2
websockets==8.1
requests~=2.24.0
eth_accounts_index~=0.0.10a7
erc20-transfer-authorization~=0.3.0a7
erc20-single-shot-faucet~=0.2.0a6
eth-address-index~=0.1.0a8
eth_accounts_index~=0.0.10a5
erc20-approval-escrow~=0.3.0a3
erc20-single-shot-faucet~=0.2.0a4
rlp==2.0.1
uWSGI==2.0.19.1
semver==2.13.0

View File

@@ -40,15 +40,10 @@ scripts =
[options.entry_points]
console_scripts =
# daemons
cic-eth-tasker = cic_eth.runnable.tasker:main
cic-eth-manager = cic_eth.runnable.manager:main
cic-eth-tag = cic_eth.runnable.tag:main
cic-eth-dispatcher = cic_eth.runnable.dispatcher:main
cic-eth-retrier = cic_eth.runnable.retry:main
# tools
cic-eth-create = cic_eth.runnable.create:main
cic-eth-inspect = cic_eth.runnable.view:main
cic-eth-ctl = cic_eth.runnable.ctrl:main
# TODO: Merge this with ctl when subcmds sorted to submodules
cic-eth-tag = cic_eth.runnable.tag:main
cic-eth-resend = cic_eth.runnable.resend:main

View File

@@ -10,11 +10,7 @@ import web3
# local imports
from cic_eth.api import AdminApi
from cic_eth.db.models.role import AccountRole
from cic_eth.db.enum import (
StatusEnum,
StatusBits,
status_str,
)
from cic_eth.db.enum import StatusEnum
from cic_eth.error import InitializationError
from cic_eth.eth.task import sign_and_register_tx
from cic_eth.eth.tx import cache_gas_refill_data
@@ -68,11 +64,7 @@ def test_resend_inplace(
api = AdminApi(c, queue=None)
t = api.resend(tx_dict['hash'], chain_str, unlock=True)
t.get()
i = 0
tx_hash_new_hex = None
for r in t.collect():
tx_hash_new_hex = r[1]
tx_hash_new_hex = t.get()
assert t.successful()
tx_raw_new = get_tx(tx_hash_new_hex)
@@ -82,144 +74,142 @@ def test_resend_inplace(
assert tx_dict_new['gasPrice'] > gas_price_before
tx_dict_after = get_tx(tx_dict['hash'])
logg.debug('logggg {}'.format(status_str(tx_dict_after['status'])))
assert tx_dict_after['status'] & StatusBits.MANUAL
assert tx_dict_after['status'] == StatusEnum.OVERRIDDEN
#def test_check_fix_nonce(
# default_chain_spec,
# init_database,
# init_eth_account_roles,
# init_w3,
# eth_empty_accounts,
# celery_session_worker,
# ):
#
# chain_str = str(default_chain_spec)
#
# sigs = []
# for i in range(5):
# s = celery.signature(
# 'cic_eth.eth.tx.refill_gas',
# [
# eth_empty_accounts[i],
# chain_str,
# ],
# queue=None,
# )
# sigs.append(s)
#
# t = celery.group(sigs)()
# txs = t.get()
# assert t.successful()
#
# tx_hash = web3.Web3.keccak(hexstr=txs[2])
# c = RpcClient(default_chain_spec)
# api = AdminApi(c, queue=None)
# address = init_eth_account_roles['eth_account_gas_provider']
# nonce_spec = api.check_nonce(address)
# assert nonce_spec['nonce']['network'] == 0
# assert nonce_spec['nonce']['queue'] == 4
# assert nonce_spec['nonce']['blocking'] == None
#
# s_set = celery.signature(
# 'cic_eth.queue.tx.set_rejected',
# [
# tx_hash.hex(),
# ],
# queue=None,
# )
# t = s_set.apply_async()
# t.get()
# t.collect()
# assert t.successful()
#
#
# nonce_spec = api.check_nonce(address)
# assert nonce_spec['nonce']['blocking'] == 2
# assert nonce_spec['tx']['blocking'] == tx_hash.hex()
#
# t = api.fix_nonce(address, nonce_spec['nonce']['blocking'])
# t.get()
# t.collect()
# assert t.successful()
#
# for tx in txs[3:]:
# tx_hash = web3.Web3.keccak(hexstr=tx)
# tx_dict = get_tx(tx_hash.hex())
# assert tx_dict['status'] == StatusEnum.OVERRIDDEN
#
#
#def test_tag_account(
# init_database,
# eth_empty_accounts,
# init_rpc,
# ):
#
# api = AdminApi(init_rpc)
#
# api.tag_account('foo', eth_empty_accounts[0])
# api.tag_account('bar', eth_empty_accounts[1])
# api.tag_account('bar', eth_empty_accounts[2])
#
# assert AccountRole.get_address('foo') == eth_empty_accounts[0]
# assert AccountRole.get_address('bar') == eth_empty_accounts[2]
#
#
#def test_ready(
# init_database,
# eth_empty_accounts,
# init_rpc,
# w3,
# ):
#
# api = AdminApi(init_rpc)
#
# with pytest.raises(InitializationError):
# api.ready()
#
# bogus_account = os.urandom(20)
# bogus_account_hex = '0x' + bogus_account.hex()
#
# api.tag_account('ETH_GAS_PROVIDER_ADDRESS', web3.Web3.toChecksumAddress(bogus_account_hex))
# with pytest.raises(KeyError):
# api.ready()
#
# api.tag_account('ETH_GAS_PROVIDER_ADDRESS', eth_empty_accounts[0])
# api.ready()
#
#
#def test_tx(
# default_chain_spec,
# cic_registry,
# init_database,
# init_rpc,
# init_w3,
# celery_session_worker,
# ):
#
# tx = {
# 'from': init_w3.eth.accounts[0],
# 'to': init_w3.eth.accounts[1],
# 'nonce': 42,
# 'gas': 21000,
# 'gasPrice': 1000000,
# 'value': 128,
# 'chainId': default_chain_spec.chain_id(),
# 'data': '',
# }
#
# (tx_hash_hex, tx_signed_raw_hex) = sign_tx(tx, str(default_chain_spec))
# queue_create(
# tx['nonce'],
# tx['from'],
# tx_hash_hex,
# tx_signed_raw_hex,
# str(default_chain_spec),
# )
# tx_recovered = unpack_signed_raw_tx(bytes.fromhex(tx_signed_raw_hex[2:]), default_chain_spec.chain_id())
# cache_gas_refill_data(tx_hash_hex, tx_recovered)
#
# api = AdminApi(init_rpc, queue=None)
# tx = api.tx(default_chain_spec, tx_hash=tx_hash_hex)
def test_check_fix_nonce(
default_chain_spec,
init_database,
init_eth_account_roles,
init_w3,
eth_empty_accounts,
celery_session_worker,
):
chain_str = str(default_chain_spec)
sigs = []
for i in range(5):
s = celery.signature(
'cic_eth.eth.tx.refill_gas',
[
eth_empty_accounts[i],
chain_str,
],
queue=None,
)
sigs.append(s)
t = celery.group(sigs)()
txs = t.get()
assert t.successful()
tx_hash = web3.Web3.keccak(hexstr=txs[2])
c = RpcClient(default_chain_spec)
api = AdminApi(c, queue=None)
address = init_eth_account_roles['eth_account_gas_provider']
nonce_spec = api.check_nonce(address)
assert nonce_spec['nonce']['network'] == 0
assert nonce_spec['nonce']['queue'] == 4
assert nonce_spec['nonce']['blocking'] == None
s_set = celery.signature(
'cic_eth.queue.tx.set_rejected',
[
tx_hash.hex(),
],
queue=None,
)
t = s_set.apply_async()
t.get()
t.collect()
assert t.successful()
nonce_spec = api.check_nonce(address)
assert nonce_spec['nonce']['blocking'] == 2
assert nonce_spec['tx']['blocking'] == tx_hash.hex()
t = api.fix_nonce(address, nonce_spec['nonce']['blocking'])
t.get()
t.collect()
assert t.successful()
for tx in txs[3:]:
tx_hash = web3.Web3.keccak(hexstr=tx)
tx_dict = get_tx(tx_hash.hex())
assert tx_dict['status'] == StatusEnum.OVERRIDDEN
def test_tag_account(
init_database,
eth_empty_accounts,
init_rpc,
):
api = AdminApi(init_rpc)
api.tag_account('foo', eth_empty_accounts[0])
api.tag_account('bar', eth_empty_accounts[1])
api.tag_account('bar', eth_empty_accounts[2])
assert AccountRole.get_address('foo') == eth_empty_accounts[0]
assert AccountRole.get_address('bar') == eth_empty_accounts[2]
def test_ready(
init_database,
eth_empty_accounts,
init_rpc,
w3,
):
api = AdminApi(init_rpc)
with pytest.raises(InitializationError):
api.ready()
bogus_account = os.urandom(20)
bogus_account_hex = '0x' + bogus_account.hex()
api.tag_account('ETH_GAS_PROVIDER_ADDRESS', web3.Web3.toChecksumAddress(bogus_account_hex))
with pytest.raises(KeyError):
api.ready()
api.tag_account('ETH_GAS_PROVIDER_ADDRESS', eth_empty_accounts[0])
api.ready()
def test_tx(
default_chain_spec,
cic_registry,
init_database,
init_rpc,
init_w3,
celery_session_worker,
):
tx = {
'from': init_w3.eth.accounts[0],
'to': init_w3.eth.accounts[1],
'nonce': 42,
'gas': 21000,
'gasPrice': 1000000,
'value': 128,
'chainId': default_chain_spec.chain_id(),
'data': '',
}
(tx_hash_hex, tx_signed_raw_hex) = sign_tx(tx, str(default_chain_spec))
queue_create(
tx['nonce'],
tx['from'],
tx_hash_hex,
tx_signed_raw_hex,
str(default_chain_spec),
)
tx_recovered = unpack_signed_raw_tx(bytes.fromhex(tx_signed_raw_hex[2:]), default_chain_spec.chain_id())
cache_gas_refill_data(tx_hash_hex, tx_recovered)
api = AdminApi(init_rpc, queue=None)
tx = api.tx(default_chain_spec, tx_hash=tx_hash_hex)

View File

@@ -10,10 +10,7 @@ from cic_registry import zero_address
# local imports
from cic_eth.db.models.otx import Otx
from cic_eth.db.models.tx import TxCache
from cic_eth.db.enum import (
StatusEnum,
StatusBits,
)
from cic_eth.db.enum import StatusEnum
logg = logging.getLogger()
@@ -172,9 +169,6 @@ def test_status_fubar(
)
t = s.apply_async()
t.get()
for n in t.collect():
pass
assert t.successful()
otx = Otx.load(tx_hash)
assert otx.status & StatusBits.UNKNOWN_ERROR
init_database.refresh(otx)
assert otx.status == StatusEnum.FUBAR

View File

@@ -8,11 +8,7 @@ import celery
# local imports
from cic_eth.db.models.base import SessionBase
from cic_eth.db.models.otx import Otx
from cic_eth.db.enum import (
StatusEnum,
StatusBits,
is_error_status,
)
from cic_eth.db.enum import StatusEnum
from cic_eth.eth.task import sign_and_register_tx
logg = logging.getLogger()
@@ -105,7 +101,7 @@ def test_states_failed(
otx = init_database.query(Otx).filter(Otx.tx_hash==tx_hash_hex).first()
otx.sendfail(session=init_database)
init_database.add(otx)
init_database.commit()
s = celery.signature(
@@ -125,9 +121,5 @@ def test_states_failed(
pass
assert t.successful()
init_database.commit()
otx = init_database.query(Otx).filter(Otx.tx_hash==tx_hash_hex).first()
assert otx.status & StatusEnum.RETRY == StatusEnum.RETRY
#assert otx.status & StatusBits.QUEUED
assert is_error_status(otx.status)
assert otx.status == StatusEnum.RETRY.value

View File

@@ -24,6 +24,7 @@ class Response:
status = 200
@pytest.mark.skip()
def test_callback_http(
celery_session_worker,
mocker,
@@ -42,6 +43,7 @@ def test_callback_http(
t.get()
@pytest.mark.skip()
def test_callback_tcp(
celery_session_worker,
):

View File

@@ -1,20 +0,0 @@
from cic_eth.db.enum import (
StatusEnum,
StatusBits,
status_str,
)
def test_status_str():
# String representation for a status in StatusEnum
s = status_str(StatusEnum.REVERTED)
assert s == 'REVERTED'
# String representation for a status not in StatusEnum
s = status_str(StatusBits.LOCAL_ERROR | StatusBits.NODE_ERROR)
assert s == 'LOCAL_ERROR,NODE_ERROR*'
# String representation for a status in StatusEnum, but with the bits_only flag set
s = status_str(StatusEnum.REVERTED, bits_only=True)
assert s == 'IN_NETWORK,NETWORK_ERROR,FINAL'
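The removed test above relies on a status being a combination of independent bits rather than a single enum value. A minimal sketch of that distinction, with made-up numeric values (the real ones live in cic_eth.db.enum):

```python
# Illustrative only: names mirror StatusBits usage in these tests, values are invented.
import enum

class StatusBits(enum.IntFlag):
    QUEUED = 0x01
    LOCAL_ERROR = 0x100
    NODE_ERROR = 0x200

status = StatusBits.LOCAL_ERROR | StatusBits.NODE_ERROR
assert status & StatusBits.NODE_ERROR    # a bit test still matches
assert status != StatusBits.NODE_ERROR   # plain equality does not
```

This is why the assertions in this compare switch between `status & ...` checks and `status == ...` checks depending on whether the composite bitfield or the enum shorthand is expected.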

View File

@@ -9,11 +9,7 @@ import pytest
from cic_eth.db.models.base import SessionBase
from cic_eth.db.models.otx import OtxStateLog
from cic_eth.db.models.otx import Otx
from cic_eth.db.enum import (
StatusEnum,
StatusBits,
is_alive,
)
from cic_eth.db.enum import StatusEnum
logg = logging.getLogger()
@@ -74,24 +70,15 @@ def test_state_log(
otx = Otx.add(0, address, tx_hash, signed_tx, session=init_database)
otx.waitforgas(session=init_database)
init_database.commit()
otx.readysend(session=init_database)
init_database.commit()
otx.sent(session=init_database)
init_database.commit()
otx.success(1024, session=init_database)
init_database.commit()
q = init_database.query(OtxStateLog)
q = q.filter(OtxStateLog.otx_id==otx.id)
q = q.order_by(OtxStateLog.date.asc())
logs = q.all()
assert logs[0].status == StatusEnum.PENDING
assert logs[1].status == StatusEnum.WAITFORGAS
assert logs[2].status & StatusBits.QUEUED
assert logs[3].status & StatusBits.IN_NETWORK
assert not is_alive(logs[4].status)
assert logs[2].status == StatusEnum.SENT
assert logs[3].status == StatusEnum.SUCCESS

View File

@@ -0,0 +1,55 @@
# standard imports
import logging
# third-party imports
import pytest
# local imports
from cic_eth.db import Otx
from cic_eth.db.error import TxStateChangeError
logg = logging.getLogger()
# Check that invalid transitions throw exceptions
# sent
def test_db_queue_states(
init_database,
):
session = init_database
# these values are completely arbitrary
tx_hash = '0xF182DFA3AD48723E7E222FE7B4C2C44C23CD4D7FF413E8999DFA15ECE53F'
address = '0x38C5559D6EDDDA1F705D3AB1A664CA1B397EB119'
signed_tx = '0xA5866A5383249AE843546BDA46235A1CA1614F538FB486140693C2EF1956FC53213F6AEF0F99F44D7103871AF3A12B126DCF9BFB7AF11143FAB3ECE2B452EE35D1320C4C7C6F999C8DF4EB09E729715B573F6672ED852547F552C4AE99D17DCD14C810'
o = Otx(
nonce=42,
address=address[2:],
tx_hash=tx_hash[2:],
signed_tx=signed_tx[2:],
)
session.add(o)
session.commit()
o.sent(session=session)
session.commit()
# send after sent is ok
o.sent(session=session)
session.commit()
o.sendfail(session=session)
session.commit()
with pytest.raises(TxStateChangeError):
o.sendfail(session=session)
o.sent(session=session)
session.commit()
o.minefail(1234, session=session)
session.commit()
with pytest.raises(TxStateChangeError):
o.sent(session=session)

View File

@@ -1,97 +0,0 @@
# standard imports
import os
# third-party imports
import pytest
# local imports
from cic_eth.db.models.otx import Otx
from cic_eth.db.enum import (
StatusEnum,
StatusBits,
is_alive,
)
@pytest.fixture(scope='function')
def otx(
init_database,
):
bogus_hash = '0x' + os.urandom(32).hex()
bogus_address = '0x' + os.urandom(20).hex()
bogus_tx_raw = '0x' + os.urandom(128).hex()
return Otx(0, bogus_address, bogus_hash, bogus_tx_raw)
def test_status_chain_gas(
init_database,
otx,
):
otx.waitforgas(init_database)
otx.readysend(init_database)
otx.sent(init_database)
otx.success(1024, init_database)
assert not is_alive(otx.status)
def test_status_chain_straight_success(
init_database,
otx,
):
otx.readysend(init_database)
otx.sent(init_database)
otx.success(1024, init_database)
assert not is_alive(otx.status)
def test_status_chain_straight_revert(
init_database,
otx,
):
otx.readysend(init_database)
otx.sent(init_database)
otx.minefail(1024, init_database)
assert not is_alive(otx.status)
def test_status_chain_nodeerror(
init_database,
otx,
):
otx.readysend(init_database)
otx.sendfail(init_database)
otx.retry(init_database)
otx.sent(init_database)
otx.success(1024, init_database)
assert not is_alive(otx.status)
def test_status_chain_nodeerror_multiple(
init_database,
otx,
):
otx.readysend(init_database)
otx.sendfail(init_database)
otx.retry(init_database)
otx.sendfail(init_database)
otx.retry(init_database)
otx.sent(init_database)
otx.success(1024, init_database)
assert not is_alive(otx.status)
def test_status_chain_nodeerror(
init_database,
otx,
):
otx.readysend(init_database)
otx.reject(init_database)
assert not is_alive(otx.status)

View File

@@ -16,14 +16,8 @@ from cic_eth.db.models.otx import OtxSync
from cic_eth.db.models.tx import TxCache
from cic_eth.db.models.lock import Lock
from cic_eth.db.models.base import SessionBase
from cic_eth.db.enum import (
StatusEnum,
LockEnum,
StatusBits,
is_alive,
is_error_status,
status_str,
)
from cic_eth.db.enum import StatusEnum
from cic_eth.db.enum import LockEnum
from cic_eth.queue.tx import create as queue_create
from cic_eth.queue.tx import set_final_status
from cic_eth.queue.tx import set_sent_status
@@ -69,14 +63,13 @@ def test_finalize(
set_sent_status(tx_hash.hex())
otx = init_database.query(Otx).filter(Otx.tx_hash==tx_hashes[0]).first()
assert otx.status & StatusBits.OBSOLETE
assert not is_alive(otx.status)
assert otx.status == StatusEnum.OBSOLETED
otx = init_database.query(Otx).filter(Otx.tx_hash==tx_hashes[1]).first()
assert otx.status & StatusBits.OBSOLETE
assert otx.status == StatusEnum.OBSOLETED
otx = init_database.query(Otx).filter(Otx.tx_hash==tx_hashes[2]).first()
assert otx.status & StatusBits.OBSOLETE
assert otx.status == StatusEnum.OBSOLETED
otx = init_database.query(Otx).filter(Otx.tx_hash==tx_hashes[3]).first()
assert otx.status == StatusEnum.PENDING
@@ -89,22 +82,19 @@ def test_finalize(
set_final_status(tx_hashes[3], 1024)
otx = init_database.query(Otx).filter(Otx.tx_hash==tx_hashes[0]).first()
assert otx.status & (StatusBits.OBSOLETE | StatusBits.FINAL)
assert not is_alive(otx.status)
assert otx.status == StatusEnum.CANCELLED
otx = init_database.query(Otx).filter(Otx.tx_hash==tx_hashes[1]).first()
assert otx.status & (StatusBits.OBSOLETE | StatusBits.FINAL)
assert otx.status == StatusEnum.CANCELLED
otx = init_database.query(Otx).filter(Otx.tx_hash==tx_hashes[2]).first()
assert otx.status & (StatusBits.OBSOLETE | StatusBits.FINAL)
assert otx.status == StatusEnum.CANCELLED
otx = init_database.query(Otx).filter(Otx.tx_hash==tx_hashes[3]).first()
assert otx.status & (StatusBits.IN_NETWORK | StatusBits.FINAL)
assert not is_error_status(otx.status)
assert otx.status == StatusEnum.SUCCESS
otx = init_database.query(Otx).filter(Otx.tx_hash==tx_hashes[4]).first()
assert otx.status & (StatusBits.IN_NETWORK | StatusBits.FINAL)
assert not is_error_status(otx.status)
assert otx.status == StatusEnum.SENT
def test_expired(
@@ -414,7 +404,7 @@ def test_obsoletion(
session = SessionBase.create_session()
q = session.query(Otx)
q = q.filter(Otx.status.op('&')(StatusEnum.OBSOLETED.value)==StatusEnum.OBSOLETED.value)
q = q.filter(Otx.status==StatusEnum.OBSOLETED)
z = 0
for o in q.all():
z += o.nonce
@@ -426,13 +416,13 @@ def test_obsoletion(
session = SessionBase.create_session()
q = session.query(Otx)
q = q.filter(Otx.status.op('&')(StatusEnum.CANCELLED.value)==StatusEnum.OBSOLETED.value)
q = q.filter(Otx.status==StatusEnum.OBSOLETED)
zo = 0
for o in q.all():
zo += o.nonce
q = session.query(Otx)
q = q.filter(Otx.status.op('&')(StatusEnum.CANCELLED.value)==StatusEnum.CANCELLED.value)
q = q.filter(Otx.status==StatusEnum.CANCELLED)
zc = 0
for o in q.all():
zc += o.nonce
@@ -460,20 +450,16 @@ def test_retry(
q = q.filter(Otx.tx_hash==tx_hash)
otx = q.first()
assert (otx.status & StatusEnum.RETRY.value) == StatusEnum.RETRY.value
assert is_error_status(otx.status)
assert otx.status == StatusEnum.RETRY
set_sent_status(tx_hash, False)
set_ready(tx_hash)
init_database.commit()
q = init_database.query(Otx)
q = q.filter(Otx.tx_hash==tx_hash)
otx = q.first()
assert (otx.status & StatusEnum.RETRY.value) == StatusBits.QUEUED.value
assert not is_error_status(otx.status)
assert otx.status == StatusEnum.RETRY
def test_get_account_tx(

1
apps/cic-meta Submodule

Submodule apps/cic-meta added at 76e8b80965

View File

@@ -1,14 +0,0 @@
[database]
#name = cic-meta
#engine = postgres
#user = postgres
#password = password
#host = localhost
#port = 5432
name = /tmp/cicmeta.sqlite
engine = sqlite
user =
password =
host =
port =
schema_sql_path = server.sqlite.sql

View File

@@ -1,7 +0,0 @@
[pgp]
exports_dir = pgp
privatekey_file = privatekeys.asc
passphrase = merman
publickey_trusted_file = publickeys.asc
publickey_active_file = publickeys.asc
publickey_encrypt_file = publickeys.asc

View File

@@ -1,3 +0,0 @@
[server]
address = 0.0.0.0
port = 7777

View File

@@ -1,5 +0,0 @@
node_modules
dist
dist-web
scratch
tests

View File

@@ -1,23 +0,0 @@
.cic_meta_variables:
variables:
APP_NAME: cic-meta
DOCKERFILE_PATH: $APP_NAME/docker/Dockerfile
.cic_meta_changes_target:
rules:
- changes:
- $CONTEXT/$APP_NAME/*
build-mr-cic-meta:
extends:
- .cic_meta_changes_target
- .py_build_merge_request
- .cic_meta_variables
build-push-cic-meta:
extends:
- .py_build_push
- .cic_meta_variables

View File

@@ -1,10 +0,0 @@
* 0.0.6
- Add server build
* 0.0.5
- Set build on install
* 0.0.4
- Change phone key generator to arbitrary value input
* 0.0.3
- Add asset key generator
- Add crypto polyfill (node uses native crypto, web uses webcrypto)
- Add phone asset

View File

@@ -1,28 +0,0 @@
FROM node:15.3.0-alpine3.10
WORKDIR /tmp/src/cic-meta
COPY cic-meta/package.json \
cic-meta/package-lock.json \
./
RUN npm install
COPY cic-meta/src/ src/
COPY cic-meta/tests/ tests/
COPY cic-meta/scripts/ scripts/
#COPY docker/*.sh /root/
RUN alias tsc=node_modules/typescript/bin/tsc
COPY cic-meta/.config/ /usr/local/etc/cic-meta/
# COPY cic-meta/scripts/server/initdb/server.postgres.sql /usr/local/share/cic-meta/sql/server.sql
COPY cic-meta/docker/db.sh ./db.sh
RUN chmod 755 ./db.sh
RUN alias ts-node=/tmp/src/cic-meta/node_modules/ts-node/dist/bin.js
ENTRYPOINT [ "./node_modules/ts-node/dist/bin.js", "./scripts/server/server.ts" ]
# COPY cic-meta/docker/start_server.sh ./start_server.sh
# RUN chmod 755 ./start_server.sh

View File

@@ -1,3 +0,0 @@
#!/bin/bash
PGPASSWORD=$DATABASE_PASSWORD psql -U $DATABASE_USER -h $DATABASE_HOST -p $DATABASE_PORT -d $DATABASE_NAME /usr/local/share/cic-meta/sql/server.sql

View File

@@ -1,3 +0,0 @@
sh ./db.sh
/usr/local/bin/node /usr/local/bin/cic-meta-server $@

View File

@@ -1,98 +0,0 @@
import sys
import os
import json
import logging
from urllib.request import Request, urlopen
import gnupg
logging.basicConfig(level=logging.DEBUG)
logg = logging.getLogger()
host = os.environ.get('CIC_META_URL', 'http://localhost:63380')
if len(sys.argv) < 2:
sys.stderr.write('Usage: {} <path-to-gpg-private-key>\n'.format(sys.argv[0]))
sys.exit(1)
# Import PGP key used to sign the data submission
gpg = gnupg.GPG(gnupghome='/tmp/.gpg')
f = open(sys.argv[1], 'r')
key_data = f.read()
f.close()
gpg.import_keys(key_data)
gpgk = gpg.list_keys()
algo = gpgk[0]['algo']
logg.info('using signing key {} algo {}'.format(gpgk[0]['keyid'], algo))
def main():
# Random key to associate with value
# (typically this is some deterministic identifier like sha256(<ethaddress>:cic-person))
k = os.urandom(32).hex()
url = os.path.join(host, k)
# Headers required for server-assisted merge operations
headers = {
'X-CIC-AUTOMERGE': 'server',
'Content-Type': 'application/json',
}
# Data to merge
data_dict = {
'foo': 'bar',
'xyzzy': 42,
}
# Send request to server to get initial automerge object and signing material
# Server will reply with current state of object merged with ours, but (obviously)
# still without a signature.
data = json.dumps(data_dict).encode('utf-8')
req = Request(url, headers=headers, data=data, method='POST')
rs = urlopen(req)
logg.info('get sign material response status: {}'.format(rs.status))
if rs.status != 200:
raise RuntimeError('request failed: {}'.format(rs.reason))
# Sign the provided digest
data = rs.read()
e = json.loads(data)
sig = gpg.sign(e['digest'], passphrase='ge', keyid=gpgk[0]['keyid'])
# Format data for the content storage request
data = {
'm': data.decode('utf-8'),
's': {
'engine': 'pgp',
'algo': algo,
'data': str(sig),
'digest': e['digest'],
},
}
# Send storage request to server
data = json.dumps(data).encode('utf-8')
req = Request(url, headers=headers, data=data, method='PUT')
rs = urlopen(req)
logg.info('signed content submission status: {}'.format(rs.status))
if rs.status != 200:
raise RuntimeError('request failed: {}'.format(rs.reason))
# Get the latest stored version of the data (without the merge graph)
req = Request(url, method='GET')
rs = urlopen(req)
logg.info('get latest data status: {}'.format(rs.status))
if rs.status != 200:
raise RuntimeError('request failed: {}'.format(rs.reason))
print(rs.read().decode('utf-8'))
if __name__ == '__main__':
main()

File diff suppressed because it is too large

View File

@@ -1,41 +0,0 @@
{
"name": "cic-client-meta",
"version": "0.0.6",
"description": "Signed CRDT metadata graphs for the CIC network",
"main": "dist/index.js",
"types": "dist/index.d.ts",
"scripts": {
"test": "mocha -r node_modules/node-localstorage/register -r ts-node/register tests/*.ts",
"build": "node_modules/typescript/bin/tsc -d --outDir dist",
"build-server": "tsc -d --outDir dist-server scripts/server/*.ts",
"pack": "node_modules/typescript/bin/tsc -d --outDir dist && webpack",
"clean": "rm -rf dist"
},
"bin": {
"cic-meta-server": "./dist-server/scripts/server/server.js"
},
"dependencies": {
"@ethereumjs/tx": "^3.0.0-beta.1",
"automerge": "^0.14.1",
"ethereumjs-wallet": "^1.0.1",
"ini": "^1.3.5",
"openpgp": "^4.10.8",
"pg": "^8.4.2",
"sqlite3": "^5.0.0",
"yargs": "^16.1.0"
},
"devDependencies": {
"@types/mocha": "^8.0.3",
"mocha": "^8.2.0",
"node-localstorage": "^2.1.6",
"ts-node": "^9.0.0",
"typescript": "^4.0.5",
"webpack": "^5.4.0",
"webpack-cli": "^4.2.0"
},
"author": "Louis Holbrook <dev@holbrook.no>",
"license": "GPL-3.0-or-later",
"engines": {
"node": "~15.3.0"
}
}

View File

@@ -1,20 +0,0 @@
const config = require('./src/config');
const fs = require('fs');
if (process.argv[2] === undefined) {
process.stderr.write('Usage: node dumpConfig.js <configdir>\n');
process.exit(1);
}
try {
const stat = fs.statSync(process.argv[2]);
if (!stat.isDirectory()) {
throw 'not a directory';
}
} catch {
process.stderr.write('Not a directory: ' + process.argv[2] + '\n');
process.exit(1);
}
const c = new config.Config(process.argv[2], process.env['CONFINI_ENV_PREFIX']);
c.process();
process.stdout.write(c.toString());

View File

@@ -1,15 +0,0 @@
#!/bin/bash
set -e
psql -v ON_ERROR_STOP=1 --username grassroots --dbname cic_meta <<-EOSQL
create table if not exists store (
id serial primary key not null,
owner_fingerprint text not null,
hash char(64) not null unique,
content text not null
);
create index if not exists idx_fp on store ((lower(owner_fingerprint)));
EOSQL

View File

@@ -1,8 +0,0 @@
create table if not exists cic_meta.store (
id serial primary key not null,
owner_fingerprint text not null,
hash char(64) not null unique,
content text not null
);
create index if not exists idx_fp on store ((lower(owner_fingerprint)));

View File

@@ -1,9 +0,0 @@
create table if not exists store (
/*id serial primary key not null,*/
id integer primary key autoincrement,
owner_fingerprint text not null,
hash char(64) not null unique,
content text not null
);
create index if not exists idx_fp on store ((lower(owner_fingerprint)));

View File

@@ -1,9 +0,0 @@
create table if not exists store (
/*id serial primary key not null,*/
id integer primary key autoincrement,
owner_fingerprint text not null,
hash char(64) not null unique,
content text not null
);
create index if not exists idx_fp on store ((lower(owner_fingerprint)));

View File

@@ -1,28 +0,0 @@
const args = require('yargs');
const standardArgs = args.option('config', {
alias: 'c',
type: 'string',
description: 'absolute path to configuration files directory',
}).option('env-prefix', {
type: 'string',
description: 'prefix to add to environment variables to match configuration directives',
}).option('database-engine', {
type: 'string',
description: 'database engine to use',
}).option('address', {
alias: 'a',
type: 'string',
description: 'ip address to bind server to',
}).option('server-address', {
alias: 'p',
type: 'number',
description: 'port to bind server to',
});
export { standardArgs };

View File

@@ -1,211 +0,0 @@
import * as Automerge from 'automerge';
import * as pgp from 'openpgp';
import * as pg from 'pg';
import { Envelope, Syncable } from '../../src/sync';
function handleNoMergeGet(db, digest, keystore) {
const sql = "SELECT content FROM store WHERE hash = '" + digest + "'";
return new Promise<string|boolean>((whohoo, doh) => {
db.query(sql, (e, rs) => {
if (e !== null && e !== undefined) {
doh(e);
return;
} else if (rs.rowCount == 0) {
whohoo(false);
return;
}
const cipherText = rs.rows[0]['content'];
pgp.message.readArmored(cipherText).then((m) => {
const opts = {
message: m,
privateKeys: [keystore.getPrivateKey()],
};
pgp.decrypt(opts).then((plainText) => {
const o = Syncable.fromJSON(plainText.data);
const r = JSON.stringify(o.m['data']);
whohoo(r);
}).catch((e) => {
console.error('decrypt', e);
doh(e);
});
}).catch((e) => {
console.error('message', e);
doh(e);
});
})
});
}
// TODO: add input for change description
function handleServerMergePost(data, db, digest, keystore, signer) {
return new Promise<string>((whohoo, doh) => {
const o = JSON.parse(data);
const cipherText = handleClientMergeGet(db, digest, keystore).then(async (v) => {
let e = undefined;
let s = undefined;
if (v === undefined) {
s = new Syncable(digest, data);
s.onwrap = (e) => {
whohoo(e.toJSON());
};
digest = s.digest();
s.wrap({
digest: digest,
});
} else {
e = Envelope.fromJSON(v);
s = e.unwrap();
s.replace(o, 'server merge');
e.set(s);
s.onwrap = (e) => {
whohoo(e.toJSON());
}
digest = s.digest();
s.wrap({
digest: digest,
});
}
});
});
}
// TODO: this still needs to merge with the stored version
function handleServerMergePut(data, db, digest, keystore, signer) {
return new Promise<boolean>((whohoo, doh) => {
const wrappedData = JSON.parse(data);
if (wrappedData.s === undefined) {
doh('signature missing');
return;
}
const e = Envelope.fromJSON(wrappedData.m);
let s = undefined;
try {
s = e.unwrap();
} catch(e) {
console.error(e)
whohoo(undefined);
}
// TODO: we should probably expose a method for replacing the signature; this is too intrusive
s.m = Automerge.change(s.m, 'sign', (doc) => {
doc['signature'] = wrappedData.s;
});
s.setSigner(signer);
s.onauthenticate = (v) => {
console.log('vvv', v);
if (!v) {
whohoo(undefined);
return;
}
const opts = {
message: pgp.message.fromText(s.toJSON()),
publicKeys: keystore.getEncryptKeys(),
};
pgp.encrypt(opts).then((cipherText) => {
const sql = "INSERT INTO store (owner_fingerprint, hash, content) VALUES ('" + signer.fingerprint() + "', '" + digest + "', '" + cipherText.data + "') ON CONFLICT (hash) DO UPDATE SET content = EXCLUDED.content;";
db.query(sql, (e, rs) => {
if (e !== null && e !== undefined) {
doh(e);
return;
}
whohoo(true);
});
});
};
s.authenticate(true)
});
}
function handleClientMergeGet(db, digest, keystore) {
const sql = "SELECT content FROM store WHERE hash = '" + digest + "'";
return new Promise<string>((whohoo, doh) => {
db.query(sql, (e, rs) => {
console.log('rs', e, rs);
if (e !== null && e !== undefined) {
doh(e);
return;
} else if (rs.rowCount == 0) { // TODO fix the postgres/sqlite method name issues, this will now break on postgres
whohoo(undefined);
return;
}
const cipherText = rs.rows[0]['content'];
pgp.message.readArmored(cipherText).then((m) => {
const opts = {
message: m,
privateKeys: [keystore.getPrivateKey()],
};
pgp.decrypt(opts).then((plainText) => {
const o = Syncable.fromJSON(plainText.data);
const e = new Envelope(o);
whohoo(e.toJSON());
}).catch((e) => {
console.error('decrypt', e);
doh(e);
});
}).catch((e) => {
console.error('message', e);
doh(e);
});
});
});
}
// TODO: this still needs to merge with the stored version
function handleClientMergePut(data, db, digest, keystore, signer) {
return new Promise<boolean>((whohoo, doh) => {
let s = undefined;
try {
const e = Envelope.fromJSON(data);
s = e.unwrap();
} catch(e) {
whohoo(false);
console.error(e)
return;
}
s.setSigner(signer);
s.onauthenticate = (v) => {
if (!v) {
whohoo(false);
return;
}
handleClientMergeGet(db, digest, keystore).then((v) => {
if (v !== undefined) {
const env = Envelope.fromJSON(v);
s.merge(env.unwrap());
}
const opts = {
message: pgp.message.fromText(s.toJSON()),
publicKeys: keystore.getEncryptKeys(),
};
pgp.encrypt(opts).then((cipherText) => {
const sql = "INSERT INTO store (owner_fingerprint, hash, content) VALUES ('" + signer.fingerprint() + "', '" + digest + "', '" + cipherText.data + "') ON CONFLICT (hash) DO UPDATE SET content = EXCLUDED.content;";
db.query(sql, (e, rs) => {
if (e !== null && e !== undefined) {
doh(e);
return;
}
whohoo(true);
});
}).catch((e) => {
doh(e);
});
});
};
s.authenticate(true)
});
}
export {
handleClientMergePut,
handleClientMergeGet,
handleServerMergePost,
handleServerMergePut,
handleNoMergeGet,
};

View File

@@ -1,207 +0,0 @@
import * as http from 'http';
import * as fs from 'fs';
import * as path from 'path';
import * as pgp from 'openpgp';
import * as handlers from './handlers';
import { Envelope, Syncable } from '../../src/sync';
import { PGPKeyStore, PGPSigner } from '../../src/auth';
import { standardArgs } from './args';
import { Config } from '../../src/config';
import { SqliteAdapter, PostgresAdapter } from '../../src/db';
let configPath = '/usr/local/etc/cic-meta';
const argv = standardArgs.argv;
if (argv['config'] !== undefined) {
configPath = argv['config'];
}
const config = new Config(configPath, argv['env-prefix']);
config.process();
console.debug(config.toString());
let fp = path.join(config.get('PGP_EXPORTS_DIR'), config.get('PGP_PUBLICKEY_ACTIVE_FILE'));
const pubksa = fs.readFileSync(fp, 'utf-8');
fp = path.join(config.get('PGP_EXPORTS_DIR'), config.get('PGP_PRIVATEKEY_FILE'));
const pksa = fs.readFileSync(fp, 'utf-8');
const dbConfig = {
'name': config.get('DATABASE_NAME'),
'user': config.get('DATABASE_USER'),
'password': config.get('DATABASE_PASSWORD'),
'host': config.get('DATABASE_HOST'),
'port': config.get('DATABASE_PORT'),
'engine': config.get('DATABASE_ENGINE'),
};
let db = undefined;
if (config.get('DATABASE_ENGINE') == 'sqlite') {
db = new SqliteAdapter(dbConfig);
} else if (config.get('DATABASE_ENGINE') == 'postgres') {
db = new PostgresAdapter(dbConfig);
} else {
throw 'database engine ' + config.get('DATABASE_ENGINE') + ' not implemented';
}
let signer = undefined;
const keystore = new PGPKeyStore(config.get('PGP_PASSPHRASE'), pksa, pubksa, pubksa, pubksa, () => {
keysLoaded();
});
function keysLoaded() {
signer = new PGPSigner(keystore);
prepareServer();
}
async function migrateDatabase(cb) {
try {
const sql = "SELECT 1 FROM store;"
db.query(sql, (e, rs) => {
if (e === null || e === undefined) {
cb();
return;
}
console.warn('db check for table "store" fail', e);
console.debug('using schema path', config.get('DATABASE_SCHEMA_SQL_PATH'));
const sql = fs.readFileSync(config.get('DATABASE_SCHEMA_SQL_PATH'), 'utf-8');
db.query(sql, (e, rs) => {
if (e !== undefined && e !== null) {
console.error('db initialization fail', e);
return;
}
cb();
});
});
} catch(e) {
console.warn('table store does not exist', e);
}
}
async function prepareServer() {
await migrateDatabase(startServer);
}
async function startServer() {
http.createServer(processRequest).listen(config.get('SERVER_PORT'));
}
const re_digest = /^\/([a-fA-F0-9]{64})\/?$/;
function parseDigest(url) {
const digest_test = url.match(re_digest);
if (digest_test === null) {
throw 'invalid digest';
}
return digest_test[1].toLowerCase();
}
async function processRequest(req, res) {
let digest = undefined;
if (!['PUT', 'GET', 'POST'].includes(req.method)) {
res.writeHead(405, {"Content-Type": "text/plain"});
res.end();
return;
}
try {
digest = parseDigest(req.url);
} catch(e) {
res.writeHead(400, {"Content-Type": "text/plain"});
res.end();
return;
}
const mergeHeader = req.headers['x-cic-automerge'];
let mod = req.method.toLowerCase() + ":automerge:";
switch (mergeHeader) {
case "client":
mod += "client"; // client handles merges
break;
case "server":
mod += "server"; // server handles merges
break;
default:
mod += "none"; // merged object only (get only)
}
let data = '';
req.on('data', (d) => {
data += d;
});
req.on('end', async () => {
console.debug('mode', mod);
let content = '';
let contentType = 'application/json';
let r:any = undefined;
try {
switch (mod) {
case 'put:automerge:client':
r = await handlers.handleClientMergePut(data, db, digest, keystore, signer);
if (r == false) {
res.writeHead(403, {"Content-Type": "text/plain"});
res.end();
return;
}
break;
case 'get:automerge:client':
content = await handlers.handleClientMergeGet(db, digest, keystore);
break;
case 'post:automerge:server':
content = await handlers.handleServerMergePost(data, db, digest, keystore, signer);
break;
case 'put:automerge:server':
r = await handlers.handleServerMergePut(data, db, digest, keystore, signer);
if (r == false) {
res.writeHead(403, {"Content-Type": "text/plain"});
res.end();
return;
}
break;
//case 'get:automerge:server':
// content = await handlers.handleServerMergeGet(db, digest, keystore);
// break;
case 'get:automerge:none':
r = await handlers.handleNoMergeGet(db, digest, keystore);
if (r == false) {
res.writeHead(404, {"Content-Type": "text/plain"});
res.end();
return;
}
content = r;
break;
default:
res.writeHead(400, {"Content-Type": "text/plain"});
res.end();
return;
}
} catch(e) {
console.error('fail', mod, digest, e);
res.writeHead(500, {"Content-Type": "text/plain"});
res.end();
return;
}
if (content === undefined) {
res.writeHead(400, {"Content-Type": "text/plain"});
res.end();
return;
}
res.writeHead(200, {
"Content-Type": contentType,
"Content-Length": content.length,
});
res.write(content);
res.end();
});
}
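For orientation, a sketch of the two read modes dispatched by processRequest above, as a client would use them. The host is an assumption, the port is the default from the server config in this compare, and the digest is a placeholder.

```python
# Assumes a cic-meta server listening on its default port from this compare.
from urllib.request import Request, urlopen

digest = '00' * 32                      # any 64-character hex digest accepted by parseDigest
url = 'http://localhost:7777/' + digest

# "none" mode: no automerge header, returns only the merged content (GET only)
print(urlopen(Request(url, method='GET')).read().decode('utf-8'))

# "client" mode: returns the full Envelope so the client can merge locally
req = Request(url, headers={'X-CIC-AUTOMERGE': 'client'}, method='GET')
print(urlopen(req).read().decode('utf-8'))
```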

View File

@@ -1,31 +0,0 @@
import { ArgPair, Syncable } from '../sync';
import { Addressable, addressToBytes, bytesToHex, toKey } from '../digest';
class Phone extends Syncable implements Addressable {
address: string
value: number
constructor(address:string, v:number) {
const o = {
msisdn: v,
}
super('', o);
Phone.toKey(v).then((phid) => {
this.id = phid;
this.address = address;
});
}
public static async toKey(msisdn:number) {
return await toKey(msisdn.toString(), ':cic.msisdn');
}
public key(): string {
return this.id;
}
}
export {
Phone,
}

View File

@@ -1,47 +0,0 @@
import { ArgPair, Syncable } from '../sync';
import { Addressable, addressToBytes, bytesToHex, toAddressKey } from '../digest';
const keySalt = new TextEncoder().encode(':cic.person');
class User extends Syncable implements Addressable {
address: string
firstName: string
lastName: string
constructor(address:string, v:Object={}) {
if (v['user'] === undefined) {
v['user'] = {
firstName: '',
lastName: '',
}
}
User.toKey(address).then((uid) => {
this.id = uid;
this.address = address;
});
super('', v);
}
public static async toKey(address:string) {
return await toAddressKey(address, ':cic.person');
}
public key(): string {
return this.id;
}
public setName(firstName:string, lastName:string) {
//const fn = new ArgPair('user.firstName', firstName)
//const ln = new ArgPair('user.lastName', lastName)
const n = {
'user': {
'firstName': firstName,
'lastName': lastName,
},
}
//this.update([fn, ln], 'update name');
this.replace(n, 'update name');
}
}
export { User };

View File

@@ -1,191 +0,0 @@
import * as pgp from 'openpgp';
import * as crypto from 'crypto';
interface Signable {
digest():string;
}
type KeyGetter = () => any;
type Signature = {
engine:string
algo:string
data:string
digest:string
}
interface Signer {
prepare(Signable):boolean;
onsign(Signature):void;
onverify(boolean):void;
sign(digest:string):void
verify(digest:string, signature:Signature):void
fingerprint():string
}
interface Authoritative {
}
interface KeyStore {
getPrivateKey: KeyGetter
getFingerprint: () => string
getTrustedKeys: () => Array<any>
getTrustedActiveKeys: () => Array<any>
getEncryptKeys: () => Array<any>
}
class PGPKeyStore implements KeyStore {
fingerprint: string
pk: any
pubk = {
active: [],
trusted: [],
encrypt: [],
}
loads = 0x00;
loadsTarget = 0x0f;
onload: (k:KeyStore) => void;
constructor(passphrase:string, pkArmor:string, pubkActiveArmor:string, pubkTrustedArmor:string, pubkEncryptArmor:string, onload = (ks:KeyStore) => {}) {
this._readKey(pkArmor, undefined, 1, passphrase);
this._readKey(pubkActiveArmor, 'active', 2);
this._readKey(pubkTrustedArmor, 'trusted', 4);
this._readKey(pubkEncryptArmor, 'encrypt', 8);
this.onload = onload;
}
private _readKey(a:string, x:any, n:number, pass?:string) {
pgp.key.readArmored(a).then((k) => {
if (pass !== undefined) {
this.pk = k.keys[0];
this.pk.decrypt(pass).then(() => {
this.fingerprint = this.pk.getFingerprint();
console.log('private key (sign)', this.fingerprint);
this._registerLoad(n);
});
} else {
this.pubk[x] = k.keys;
k.keys.forEach((pubk) => {
console.log('public key (' + x + ')', pubk.getFingerprint());
});
this._registerLoad(n);
}
});
}
private _registerLoad(b:number) {
this.loads |= b;
if (this.loads == this.loadsTarget) {
this.onload(this);
}
}
public getTrustedKeys(): Array<any> {
return this.pubk['trusted'];
}
public getTrustedActiveKeys(): Array<any> {
return this.pubk['active'];
}
public getEncryptKeys(): Array<any> {
return this.pubk['encrypt'];
}
public getPrivateKey(): any {
return this.pk;
}
public getFingerprint(): string {
return this.fingerprint;
}
}
class PGPSigner implements Signer {
engine = 'pgp'
algo = 'sha256'
dgst: string
signature: Signature
keyStore: KeyStore
onsign: (Signature) => void
onverify: (boolean) => void
constructor(keyStore:KeyStore) {
this.keyStore = keyStore
this.onsign = (string) => {};
this.onverify = (boolean) => {};
}
public fingerprint(): string {
return this.keyStore.getFingerprint();
}
public prepare(material:Signable):boolean {
this.dgst = material.digest();
return true;
}
public verify(digest:string, signature:Signature) {
pgp.signature.readArmored(signature.data).then((s) => {
const opts = {
message: pgp.cleartext.fromText(digest),
publicKeys: this.keyStore.getTrustedKeys(),
signature: s,
};
pgp.verify(opts).then((v) => {
let i = 0;
for (i = 0; i < v.signatures.length; i++) {
const s = v.signatures[i];
if (s.valid) {
this.onverify(s);
return;
}
}
console.error('checked ' + i + ' signature(s) but none valid');
this.onverify(false);
});
}).catch((e) => {
console.error(e);
this.onverify(false);
});
}
public sign(digest:string) {
const m = pgp.cleartext.fromText(digest);
const pk = this.keyStore.getPrivateKey();
const opts = {
message: m,
privateKeys: [pk],
detached: true,
}
pgp.sign(opts).then((s) => {
this.signature = {
engine: this.engine,
algo: this.algo,
data: s.signature,
// TODO: fix for browser later
digest: digest,
};
this.onsign(this.signature);
}).catch((e) => {
console.error(e);
this.onsign(undefined);
});
}
}
export {
Signature,
Authoritative,
Signer,
KeyGetter,
Signable,
KeyStore,
PGPSigner,
PGPKeyStore,
};

View File

@@ -1,71 +0,0 @@
import * as fs from 'fs';
import * as ini from 'ini';
import * as path from 'path';
class Config {
filepath: string
store: Object
censor: Array<string>
require: Array<string>
env_prefix: string
constructor(filepath:string, env_prefix?:string) {
this.filepath = filepath;
this.store = {};
this.censor = [];
this.require = [];
this.env_prefix = '';
if (env_prefix !== undefined) {
this.env_prefix = env_prefix + "_";
}
}
public process() {
const d = fs.readdirSync(this.filepath);
const r = /.*\.ini$/;
for (let i = 0; i < d.length; i++) {
const f = d[i];
if (!f.match(r)) {
return;
}
const fp = path.join(this.filepath, f);
const v = fs.readFileSync(fp, 'utf-8');
const inid = ini.decode(v);
const inik = Object.keys(inid);
for (let j = 0; j < inik.length; j++) {
const k_section = inik[j]
const k = k_section.toUpperCase();
Object.keys(inid[k_section]).forEach((k_directive) => {
const kk = k_directive.toUpperCase();
const kkk = k + '_' + kk;
let r = inid[k_section][k_directive];
const k_env = this.env_prefix + kkk
const env = process.env[k_env];
if (env !== undefined) {
console.debug('Environment variable ' + k_env + ' overrides ' + kkk);
r = env;
}
this.store[kkk] = r;
});
}
}
}
public get(s:string) {
return this.store[s];
}
public toString() {
let s = '';
Object.keys(this.store).forEach((k) => {
s += k + '=' + this.store[k] + '\n';
});
return s;
}
}
export { Config };
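To make the flattening rule concrete, a hypothetical Python rendering of the same behaviour; the file contents, prefix and values are invented for illustration only.

```python
# Mirrors Config.process() above: [section] directive -> SECTION_DIRECTIVE,
# with an environment variable <prefix>_SECTION_DIRECTIVE taking precedence.
import configparser
import os

cp = configparser.ConfigParser()
cp.read_string('[database]\nhost = localhost\n')   # stand-in for an .ini file

store = {}
env_prefix = 'CIC_'                                 # matches env_prefix + '_' in the class above
for section in cp.sections():
    for directive, value in cp[section].items():
        key = (section + '_' + directive).upper()   # DATABASE_HOST
        store[key] = os.environ.get(env_prefix + key, value)

print(store['DATABASE_HOST'])                       # 'localhost' unless CIC_DATABASE_HOST is set
```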

View File

@@ -1,38 +0,0 @@
import { JSONSerializable } from './format';
const ENGINE_NAME = 'automerge';
const ENGINE_VERSION = '0.14.1';
const NETWORK_NAME = 'cic';
const NETWORK_VERSION = '1';
const CRYPTO_NAME = 'pgp';
const CRYPTO_VERSION = '2';
type VersionedSpec = {
name: string
version: string
ext?: Object
}
const engineSpec:VersionedSpec = {
name: ENGINE_NAME,
version: ENGINE_VERSION,
}
const cryptoSpec:VersionedSpec = {
name: CRYPTO_NAME,
version: CRYPTO_VERSION,
}
const networkSpec:VersionedSpec = {
name: NETWORK_NAME,
version: NETWORK_VERSION,
}
export {
engineSpec,
cryptoSpec,
networkSpec,
VersionedSpec,
};

View File

@@ -1,27 +0,0 @@
import * as crypto from 'crypto';
const _algs = {
'SHA-256': 'sha256',
}
function cryptoWrapper() {
}
cryptoWrapper.prototype.digest = async function(s, d) {
const h = crypto.createHash(_algs[s]);
h.update(d);
return h.digest();
}
let subtle = undefined;
if (typeof window !== 'undefined') {
subtle = window.crypto.subtle;
} else {
subtle = new cryptoWrapper();
}
export {
subtle,
}

View File

@@ -1,90 +0,0 @@
import * as pg from 'pg';
import * as sqlite from 'sqlite3';
type DbConfig = {
name: string
host: string
port: number
user: string
password: string
}
interface DbAdapter {
query: (s:string, callback:(e:any, rs:any) => void) => void
close: () => void
}
const re_creatematch = /^(CREATE)/i
const re_getmatch = /^(SELECT)/i;
const re_setmatch = /^(INSERT|UPDATE)/i;
class SqliteAdapter implements DbAdapter {
db: any
constructor(dbConfig:DbConfig, callback?:(any) => void) {
this.db = new sqlite.Database(dbConfig.name); //, callback);
}
public query(s:string, callback:(e:any, rs?:any) => void): void {
const local_callback = (e, rs) => {
let r = undefined;
if (rs !== undefined) {
r = {
rowCount: rs.length,
rows: rs,
}
}
callback(e, r);
};
if (s.match(re_getmatch)) {
this.db.all(s, local_callback);
} else if (s.match(re_setmatch)) {
this.db.run(s, local_callback);
} else if (s.match(re_creatematch)) {
this.db.run(s, callback);
} else {
throw 'unhandled query';
}
}
public close() {
this.db.close();
}
}
class PostgresAdapter implements DbAdapter {
db: any
constructor(dbConfig:DbConfig) {
let o = dbConfig;
o['database'] = o.name;
this.db = new pg.Pool(o);
return this.db;
}
public query(s:string, callback:(e:any, rs:any) => void): void {
this.db.query(s, (e, rs) => {
let r = {
length: rs.rowCount,
}
rs.length = rs.rowCount;
if (e === undefined) {
e = null;
}
console.debug(e, rs);
callback(e, rs);
});
}
public close() {
this.db.end();
}
}
export {
DbConfig,
SqliteAdapter,
PostgresAdapter,
}

View File

@@ -1,67 +0,0 @@
import * as crypto from './crypto';
interface Addressable {
key(): string
digest(): string
}
function stringToBytes(s:string) {
const a = new Uint8Array(20);
let j = 2;
for (let i = 0; i < a.byteLength; i++) {
const n = parseInt(s.substring(j, j+2), 16);
a[i] = n;
j += 2;
}
return a;
}
function bytesToHex(a:Uint8Array) {
let s = '';
for (let i = 0; i < a.byteLength; i++) {
const h = '00' + a[i].toString(16);
s += h.slice(-2);
}
return s;
}
async function mergeKey(a:Uint8Array, s:Uint8Array) {
const y = new Uint8Array(a.byteLength + s.byteLength);
for (let i = 0; i < a.byteLength; i++) {
y[i] = a[i];
}
for (let i = 0; i < s.byteLength; i++) {
y[a.byteLength + i] = s[i];
}
const z = await crypto.subtle.digest('SHA-256', y);
return bytesToHex(new Uint8Array(z));
}
async function toKey(v:string, salt:string) {
const a = stringToBytes(v);
const s = new TextEncoder().encode(salt);
return await mergeKey(a, s);
}
async function toAddressKey(zeroExHex:string, salt:string) {
const a = addressToBytes(zeroExHex);
const s = new TextEncoder().encode(salt);
return await mergeKey(a, s);
}
const re_addrHex = /^0[xX][a-fA-F0-9]{40}$/;
function addressToBytes(s:string) {
if (!s.match(re_addrHex)) {
throw 'invalid address hex';
}
return stringToBytes(s);
}
export {
toKey,
toAddressKey,
bytesToHex,
addressToBytes,
Addressable,
}
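The same derivation sketched in Python, for cross-checking against the key comment in the Python client script earlier in this compare; the address is a placeholder, and the ':cic.person' salt comes from the User asset in this diff.

```python
# Equivalent of toAddressKey(address, ':cic.person') above: sha256 over the raw
# 20 address bytes concatenated with the UTF-8 encoded salt, hex encoded.
import hashlib

def to_address_key(address_hex: str, salt: str) -> str:
    material = bytes.fromhex(address_hex[2:]) + salt.encode('utf-8')
    return hashlib.sha256(material).hexdigest()

print(to_address_key('0x' + 'ee' * 20, ':cic.person'))
```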

View File

@@ -1,58 +0,0 @@
import { v4 as uuidv4 } from 'uuid';
import { Syncable } from './sync';
import { Store } from './store';
import { PubSub } from './transport';
function toIndexKey(id:string):string {
const d = Date.now();
return d + '_' + id + '_' + uuidv4();
}
const _re_indexKey = /^\d+_(.+)_[-\d\w]+$/;
function fromIndexKey(s:string):string {
const m = s.match(_re_indexKey);
if (m === null) {
throw 'Invalid index key';
}
return m[1];
}
class Dispatcher {
idx: Array<string>
syncer: PubSub
store: Store
constructor(store:Store, syncer:PubSub) {
this.idx = new Array<string>()
this.syncer = syncer;
this.store = store;
}
public isDirty(): boolean {
return this.idx.length > 0;
}
public add(id:string, item:Syncable): string {
const v = item.toJSON();
const k = toIndexKey(id);
this.store.put(k, v, true);
localStorage.setItem(k, v);
this.idx.push(k);
return k;
}
public sync(offset:number): number {
let i = 0;
this.idx.forEach((k) => {
const v = localStorage.getItem(k);
const k_id = fromIndexKey(k);
this.syncer.pub(v); // this must block until guaranteed delivery
localStorage.removeItem(k);
i++;
});
return i;
}
}
export { Dispatcher, toIndexKey, fromIndexKey }

View File

@@ -1,5 +0,0 @@
interface JSONSerializable {
toJSON(): string
}
export { JSONSerializable };

View File

@@ -1,4 +0,0 @@
export { PGPSigner, PGPKeyStore } from './auth';
export { Envelope, Syncable } from './sync';
export { User } from './assets/user';
export { Phone } from './assets/phone';

View File

@@ -1,9 +0,0 @@
import { Syncable } from './sync';
interface Store {
put(string, Syncable, boolean?)
get(string):Syncable
delete(string)
}
export { Store };

View File

@@ -1,266 +0,0 @@
import * as Automerge from 'automerge';
import { JSONSerializable } from './format';
import { Authoritative, Signer, PGPSigner, Signable, Signature } from './auth';
import { engineSpec, cryptoSpec, networkSpec, VersionedSpec } from './constants';
const fullSpec:VersionedSpec = {
name: 'cic',
version: '1',
ext: {
network: cryptoSpec,
engine: engineSpec,
},
}
class Envelope {
o = fullSpec
constructor(payload:Object) {
this.set(payload);
}
public set(payload:Object) {
this.o['payload'] = payload
}
public get():string {
return this.o['payload'];
}
public toJSON() {
return JSON.stringify(this.o);
}
public static fromJSON(s:string): Envelope {
const e = new Envelope(undefined);
e.o = JSON.parse(s);
return e;
}
public unwrap(): Syncable {
return Syncable.fromJSON(this.o['payload']);
}
}
class ArgPair {
k:string
v:any
constructor(k:string, v:any) {
this.k = k;
this.v = v;
}
}
class SignablePart implements Signable {
s: string
constructor(s:string) {
this.s = s;
}
public digest():string {
return this.s;
}
}
function orderDict(src) {
let dst;
if (Array.isArray(src)) {
dst = [];
src.forEach((v) => {
if (typeof(v) == 'object') {
v = orderDict(v);
}
dst.push(v);
});
} else {
dst = {}
Object.keys(src).sort().forEach((k) => {
let v = src[k];
if (typeof(v) == 'object') {
v = orderDict(v);
}
dst[k] = v;
});
}
return dst;
}
class Syncable implements JSONSerializable, Authoritative {
id: string
timestamp: number
m: any // automerge object
e: Envelope
signer: Signer
onwrap: (string) => void
onauthenticate: (boolean) => void
// TODO: Move data to sub-object so timestamp, id, signature don't collide
constructor(id:string, v:Object) {
this.id = id;
const o = {
'id': id,
'timestamp': Math.floor(Date.now() / 1000),
'data': v,
}
//this.m = Automerge.from(v)
this.m = Automerge.from(o)
}
public setSigner(signer:Signer) {
this.signer = signer;
this.signer.onsign = (s) => {
this.wrap(s);
};
}
// TODO: To keep integrity, the non-link key/value pairs for each step also need to be hashed
public digest(): string {
const links = [];
Automerge.getAllChanges(this.m).forEach((ch:Object) => {
const op:Array<any> = ch['ops'];
ch['ops'].forEach((op:Array<Object>) => {
if (op['action'] == 'link') {
//console.log('op link', op);
links.push([op['obj'], op['value']]);
}
});
});
//return JSON.stringify(links);
const j = JSON.stringify(links);
return Buffer.from(j).toString('base64');
}
private wrap(s:any) {
this.m = Automerge.change(this.m, 'sign', (doc) => {
doc['signature'] = s;
});
this.e = new Envelope(this.toJSON());
console.log('wrappin s', s, typeof(s));
this.e.o['digest'] = s.digest;
if (this.onwrap !== undefined) {
this.onwrap(this.e);
}
}
// private _verifyLoop(i:number, history:Array<any>, signable:Signable, result:boolean) {
// if (!result) {
// this.onauthenticate(false);
// return;
// } else if (history.length == 0) {
// this.onauthenticate(true);
// return;
// }
// const h = history.shift()
// if (i % 2 == 0) {
// i++;
// signable = {
// digest: () => {
// return Automerge.save(h.snapshot)
// },
// };
// this._verifyLoop(i, history, signable, true);
// } else {
// i++;
// const signature = h.snapshot['signature'];
// console.debug('signature', signature, signable.digest());
// this.signer.onverify = (v) => {
// this._verifyLoop(i, history, signable, v)
// }
// this.signer.verify(signable, signature);
// }
// }
//
// // TODO: This should replay the graph and check signatures on each step
// public _authenticate(full:boolean=false) {
// let h = Automerge.getHistory(this.m);
// h.forEach((m) => {
// //console.debug(m.snapshot);
// });
// const signable = {
// digest: () => { return '' },
// }
// if (!full) {
// h = h.slice(h.length-2);
// }
// this._verifyLoop(0, h, signable, true);
// }
public authenticate(full:boolean=false) {
if (full) {
console.warn('only doing shallow authentication for now, sorry');
}
//console.log('authenticating', signable.digest());
//console.log('signature', this.m.signature);
this.signer.onverify = (v) => {
//this._verifyLoop(i, history, signable, v)
this.onauthenticate(v);
}
this.signer.verify(this.m.signature.digest, this.m.signature);
}
public sign() {
//this.signer.prepare(this);
this.signer.sign(this.digest());
}
public update(changes:Array<ArgPair>, changesDescription:string) {
this.m = Automerge.change(this.m, changesDescription, (m) => {
changes.forEach((c) => {
let path = c.k.split('.');
let target = m['data'];
while (path.length > 1) {
const part = path.shift();
target = target[part];
}
target[path[0]] = c.v;
});
m['timestamp'] = Math.floor(Date.now() / 1000);
});
}
public replace(o:Object, changesDescription:string) {
this.m = Automerge.change(this.m, changesDescription, (m) => {
Object.keys(o).forEach((k) => {
m['data'][k] = o[k];
});
Object.keys(m).forEach((k) => {
if (o[k] == undefined) {
delete m['data'][k];
}
});
m['timestamp'] = Math.floor(Date.now() / 1000);
});
}
public merge(s:Syncable) {
this.m = Automerge.merge(s.m, this.m);
}
public toJSON(): string {
const s = Automerge.save(this.m);
const o = JSON.parse(s);
const oo = orderDict(o)
return JSON.stringify(oo);
}
public static fromJSON(s:string): Syncable {
const doc = Automerge.load(s);
let y = new Syncable(doc['id'], {});
y.m = doc
return y
}
}
export { JSONSerializable, Syncable, ArgPair, Envelope };

View File

@@ -1,11 +0,0 @@
interface SubConsumer {
post(string)
}
interface PubSub {
pub(v:string):boolean
close()
}
export { PubSub, SubConsumer };

View File

@@ -1,50 +0,0 @@
import * as Automerge from 'automerge';
import assert = require('assert');
import { Dispatcher, toIndexKey, fromIndexKey } from '../src/dispatch';
import { User } from '../src/assets/user';
import { Syncable, ArgPair } from '../src/sync';
import { MockSigner, MockStore } from './mock';
describe('basic', () => {
it('store', () => {
const store = new MockStore('s');
assert.equal(store.name, 's');
const mockSigner = new MockSigner();
const v = new Syncable('foo', {baz: 42});
v.setSigner(mockSigner);
store.put('foo', v);
const one = store.get('foo').toJSON();
const vv = new Syncable('bar', {baz: 666});
vv.setSigner(mockSigner);
assert.throws(() => {
store.put('foo', vv)
});
store.put('foo', vv, true);
const other = store.get('foo').toJSON();
assert.notEqual(one, other);
store.delete('foo');
assert.equal(store.get('foo'), undefined);
});
it('add_doc_to_dispatcher', () => {
const store = new MockStore('s');
//const syncer = new MockSyncer();
const dispatcher = new Dispatcher(store, undefined);
const user = new User('foo');
dispatcher.add(user.id, user);
assert(dispatcher.isDirty());
});
it('dispatch_keyindex', () => {
const s = 'foo';
const k = toIndexKey(s);
const v = fromIndexKey(k);
assert.equal(s, v);
});
});

View File

@@ -1,212 +0,0 @@
import * as Automerge from 'automerge';
import assert = require('assert');
import * as pgp from 'openpgp';
import * as fs from 'fs';
import { PGPSigner } from '../src/auth';
import { Syncable, ArgPair } from '../src/sync';
import { MockKeyStore, MockSigner } from './mock';
describe('sync', async () => {
it('sync_merge', () => {
const mockSigner = new MockSigner();
const s = new Syncable('foo', {
bar: 'baz',
});
s.setSigner(mockSigner);
const changePair = new ArgPair('xyzzy', 42);
s.update([changePair], 'ch-ch-cha-changes');
assert.equal(s.m.data['xyzzy'], 42)
assert.equal(s.m.data['bar'], 'baz')
assert.equal(s.m['id'], 'foo')
assert.equal(Automerge.getHistory(s.m).length, 2);
});
it('sync_serialize', () => {
const mockSigner = new MockSigner();
const s = new Syncable('foo', {
bar: 'baz',
});
s.setSigner(mockSigner);
const j = s.toJSON();
const ss = Syncable.fromJSON(j);
assert.equal(ss.m['id'], 'foo');
assert.equal(ss.m['data']['bar'], 'baz');
assert.equal(Automerge.getHistory(ss.m).length, 1);
});
it('sync_sign_and_wrap', () => {
const mockSigner = new MockSigner();
const s = new Syncable('foo', {
bar: 'baz',
});
s.setSigner(mockSigner);
s.onwrap = (e) => {
const j = e.toJSON();
const v = JSON.parse(j);
assert.deepEqual(v.payload, e.o.payload);
}
s.sign();
});
it('sync_verify_success', async () => {
const pksa = fs.readFileSync(__dirname + '/privatekeys.asc');
const pks = await pgp.key.readArmored(pksa);
await pks.keys[0].decrypt('merman');
await pks.keys[1].decrypt('beastman');
const pubksa = fs.readFileSync(__dirname + '/publickeys.asc');
const pubks = await pgp.key.readArmored(pubksa);
const oneStore = new MockKeyStore(pks.keys[0], pubks.keys);
const twoStore = new MockKeyStore(pks.keys[1], pubks.keys);
const threeStore = new MockKeyStore(pks.keys[2], [pubks.keys[0], pubks.keys[2]]);
const oneSigner = new PGPSigner(oneStore);
const twoSigner = new PGPSigner(twoStore);
const threeSigner = new PGPSigner(threeStore);
const x = new Syncable('foo', {
bar: 'baz',
});
x.setSigner(oneSigner);
// TODO: make this look better
x.onwrap = (e) => {
let updateData = new ArgPair('bar', 'xyzzy');
x.update([updateData], 'change one');
x.onwrap = (e) => {
x.setSigner(twoSigner);
updateData = new ArgPair('bar', 42);
x.update([updateData], 'change two');
x.onwrap = (e) => {
const p = e.unwrap();
p.setSigner(twoSigner);
p.onauthenticate = (v) => {
assert(v);
}
p.authenticate();
}
x.sign();
};
x.sign();
}
x.sign();
});
it('sync_verify_fail', async () => {
const pksa = fs.readFileSync(__dirname + '/privatekeys.asc');
const pks = await pgp.key.readArmored(pksa);
await pks.keys[0].decrypt('merman');
await pks.keys[1].decrypt('beastman');
const pubksa = fs.readFileSync(__dirname + '/publickeys.asc');
const pubks = await pgp.key.readArmored(pubksa);
const oneStore = new MockKeyStore(pks.keys[0], pubks.keys);
const twoStore = new MockKeyStore(pks.keys[1], pubks.keys);
const threeStore = new MockKeyStore(pks.keys[2], [pubks.keys[0], pubks.keys[2]]);
const oneSigner = new PGPSigner(oneStore);
const twoSigner = new PGPSigner(twoStore);
const threeSigner = new PGPSigner(threeStore);
const x = new Syncable('foo', {
bar: 'baz',
});
x.setSigner(oneSigner);
// TODO: make this look better
x.onwrap = (e) => {
let updateData = new ArgPair('bar', 'xyzzy');
x.update([updateData], 'change one');
x.onwrap = (e) => {
x.setSigner(twoSigner);
updateData = new ArgPair('bar', 42);
x.update([updateData], 'change two');
x.onwrap = (e) => {
const p = e.unwrap();
p.setSigner(threeSigner);
p.onauthenticate = (v) => {
assert(!v);
}
p.authenticate();
}
x.sign();
};
x.sign();
}
x.sign();
});
xit('sync_verify_shallow_tricked', async () => {
const pksa = fs.readFileSync(__dirname + '/privatekeys.asc');
const pks = await pgp.key.readArmored(pksa);
await pks.keys[0].decrypt('merman');
await pks.keys[1].decrypt('beastman');
const pubksa = fs.readFileSync(__dirname + '/publickeys.asc');
const pubks = await pgp.key.readArmored(pubksa);
const oneStore = new MockKeyStore(pks.keys[0], pubks.keys);
const twoStore = new MockKeyStore(pks.keys[1], pubks.keys);
const threeStore = new MockKeyStore(pks.keys[2], [pubks.keys[0], pubks.keys[2]]);
const oneSigner = new PGPSigner(oneStore);
const twoSigner = new PGPSigner(twoStore);
const threeSigner = new PGPSigner(threeStore);
const x = new Syncable('foo', {
bar: 'baz',
});
x.setSigner(twoSigner);
// TODO: make this look better
x.onwrap = (e) => {
let updateData = new ArgPair('bar', 'xyzzy');
x.update([updateData], 'change one');
x.onwrap = (e) => {
updateData = new ArgPair('bar', 42);
x.update([updateData], 'change two');
x.setSigner(oneSigner);
x.onwrap = (e) => {
const p = e.unwrap();
p.setSigner(threeSigner);
p.onauthenticate = (v) => {
assert(v);
p.onauthenticate = (v) => {
assert(!v);
}
p.authenticate(true);
}
p.authenticate();
}
x.sign();
};
x.sign();
}
x.sign();
});
});

View File

@@ -1,14 +0,0 @@
import * as assert from 'assert';
import { MockPubSub, MockConsumer } from './mock';
describe('transport', () => {
it('pub_sub', () => {
const c = new MockConsumer();
const ps = new MockPubSub('foo', c);
ps.pub('foo');
ps.pub('bar');
ps.flush();
assert.deepEqual(c.omnoms, ['foo', 'bar']);
});
});

View File

@@ -1,46 +0,0 @@
import assert = require('assert');
import pgp = require('openpgp');
import crypto = require('crypto');
import { Syncable, ArgPair } from '../src/sync';
import { MockKeyStore, MockSignable } from './mock';
import { PGPSigner } from '../src/auth';
describe('auth', async () => {
await it('digest', async () => {
const opts = {
userIds: [
{
name: 'John Marston',
email: 'red@dead.com',
},
],
numBits: 2048,
passphrase: 'foo',
};
const pkgen = await pgp.generateKey(opts);
const pka = pkgen.privateKeyArmored;
const pks = await pgp.key.readArmored(pka);
await pks.keys[0].decrypt('foo');
const pubka = pkgen.publicKeyArmored;
const pubks = await pgp.key.readArmored(pubka);
const keyStore = new MockKeyStore(pks.keys[0], pubks.keys);
const s = new PGPSigner(keyStore);
const message = await pgp.cleartext.fromText('foo');
s.onverify = (ok) => {
assert(ok);
}
s.onsign = (signature) => {
s.onverify((v) => {
console.log('bar', v);
});
s.verify('foo', signature);
}
await s.sign('foo');
});
});

View File

@@ -1,47 +0,0 @@
import * as assert from 'assert';
import * as pgp from 'openpgp';
import { Dispatcher } from '../src/dispatch';
import { User } from '../src/assets/user';
import { PGPSigner, KeyStore } from '../src/auth';
import { SubConsumer } from '../src/transport';
import { MockStore, MockPubSub, MockConsumer, MockKeyStore } from './mock';
async function createKeyStore() {
const opts = {
userIds: [
{
name: 'John Marston',
email: 'red@dead.com',
},
],
numBits: 2048,
passphrase: 'foo',
};
const pkgen = await pgp.generateKey(opts);
const pka = pkgen.privateKeyArmored;
const pks = await pgp.key.readArmored(pka);
await pks.keys[0].decrypt('foo');
return new MockKeyStore(pks.keys[0], []);
}
describe('fullchain', async () => {
it('dispatch_and_publish_user', async () => {
const g = await createKeyStore();
const n = new PGPSigner(g);
const u = new User('u1', {});
u.setSigner(n);
u.setName('Nico', 'Bellic');
const s = new MockStore('fooStore');
const c = new MockConsumer();
const p = new MockPubSub('fooPubSub', c);
const d = new Dispatcher(s, p);
u.onwrap = (e) => {
d.add(u.id, e);
d.sync(0);
assert.equal(p.pubs.length, 1);
};
u.sign();
});
});

View File

@@ -1,150 +0,0 @@
import * as crypto from 'crypto';
import { Signable, Signature, KeyStore } from '../src/auth';
import { Store } from '../src/store';
import { PubSub, SubConsumer } from '../src/transport';
import { Syncable } from '../src/sync';
class MockStore implements Store {
contents: Object
name: string
constructor(name:string) {
this.name = name;
this.contents = {};
}
public put(k:string, v:Syncable, existsOk = false) {
if (!existsOk && this.contents[k] !== undefined) {
throw '"' + k + '" already exists in store ' + this.name;
} 
this.contents[k] = v;
}
public get(k:string): Syncable {
return this.contents[k];
}
public delete(k:string) {
delete this.contents[k];
}
}
class MockSigner {
onsign: (string) => void
onverify: (boolean) => void
public verify(src:string, signature:Signature) {
return true;
}
public sign(s:string):boolean {
this.onsign('there would be a signature here');
return true;
}
public prepare(m:Signable):boolean {
return true;
}
public fingerprint():string {
return '';
}
}
class MockConsumer implements SubConsumer {
omnoms: Array<string>
constructor() {
this.omnoms = Array<string>();
}
public post(v:string) {
this.omnoms.push(v);
}
}
class MockPubSub implements PubSub {
pubs: Array<string>
consumer: SubConsumer
constructor(name:string, consumer:SubConsumer) {
this.pubs = Array<string>();
this.consumer = consumer;
}
public pub(v:string): boolean {
this.pubs.push(v);
return true;
}
public flush() {
while (this.pubs.length > 0) {
const s = this.pubs.shift();
this.consumer.post(s);
}
}
public close() {
}
}
class MockSignable implements Signable {
src: string
dst: string
constructor(src:string) {
this.src = src;
}
public digest():string {
const h = crypto.createHash('sha256');
h.update(this.src);
this.dst= h.digest('hex');
return this.dst;
}
}
class MockKeyStore implements KeyStore {
pk: any
pubks: Array<any>
constructor(pk:any, pubks:Array<any>) {
this.pk = pk;
this.pubks = pubks;
}
public getPrivateKey(): any {
return this.pk;
}
public getTrustedKeys(): Array<any> {
return this.pubks;
}
public getTrustedActiveKeys(): Array<any> {
return [];
}
public getEncryptKeys(): Array<any> {
return [];
}
public getFingerprint(): string {
return '';
}
}
export {
MockStore,
MockPubSub,
MockConsumer,
MockSignable,
MockKeyStore,
MockSigner,
};

View File

@@ -1,241 +0,0 @@
-----BEGIN PGP PRIVATE KEY BLOCK-----
lQWGBF+hSOgBDACpkPQEjADjnQtjmAsdPYpx5N+OMJBYj1DAoIYsDtV6vbcBJQt9
4Om3xl7RBhv9m2oLgzPsiRwjCEFRWyNSu0BUp5CFjcXfm0S4K2egx4erFnTnSSC9
S6tmVNrVNEXvScE6sKAnmJ7JNX1ExJuEiWPbUDRWJ1hoI9+AR+8EONeJRLo/j0Np
+S4IFDn0PsxdT+SB0GY0z2cEgjvjoPr4lW9IAb8Ft9TDYp+mOzejn1Fg7CuIrlBR
SAv+sj7bVQw15dh1SpbwtS5xxubCa8ExEGI4ByXmeXdR0KZJ+EA5ksO0iSsQ/6ip
SOdSg+i0niOClFNm1P/OhbUsYAxCUfiX654FMn2zoxVBEjJ3e7l0pH7ktodaxEct
PofQLBA9LSDUIejqJsU0npw/DHDD2uvxG+/A6lgV9L8ETlvgp8RzeOCf2bHuiKYY
z87txvkFwsXgU1+TZxbk+mtCBbngsVPLNarY/KGkVJL+yhcHRD0Pl4wXUd6auQuY
6vQ9AuKiCT1We2sAEQEAAf4HAwK2fexgxtQ8CfgdeIlzdeY9K+HZL18brETddoya
3BeC1MSH7gxXqtCVQ5qdBk27J4wlGl0H83kYSCeVQs6hmrSrv8JCErguIdpZIJ/D
kcjGlGrOELfnXeif0VfUZN3LWxJZizCIS8I9F8VKD9c57nZEcbWcKTLizV0j1BeT
sdrumt/3UDhpCJTj1q3biRsiUbpmX+jPlRWN3OeSZJaRRyy4FnzTs3bndBYmkOsk
ZNKRk7jRNEU/LItbABStuP2zDrZsampVntKcNRXBVE2170t4T/Q4Gc0ckz4ohprY
lGykE2DdwapCdcKWccVXhM+svDwoLf+g4kjKuCE7R11v6rZlRxYrfquZXwtUx0DB
17x+JqyBaacyWm3Vq65DcNyiQw2NqCPdJU+iZoOGaermKIz3BqwxY+WE0HyjxQkH
P5KUpKQTmsTIlwHWFOVDYMRUUvD7P7XiElOBECDb3bJL9+z2SHZWTE6OaZKnmBFf
ZTdXgtGe/Ctx97PgWZOwM500Q45QC+D59NCYtXRtqi9WCLGsZpQbSZmojIUOJRuD
s5un+8lA3T0BhlJS4DC9CgN8Lxs4kT/XV/LYiXU4Z8MWEahurEbpDwH6YNzGktUR
zuE9HOe0fesdrOV0Sk2aol5CCRj3vTcsROTFUcT6UYPq28vy0U3zUJVvyNa0swk7
PUiB+xhCi24Z9dy+0F1q1L20tJ/YCjC/tyLI36Rkl85PnoviwOOOll0+/claf8BV
e9x43voYe0o8Q7ttU0aFxVH/lGaTRyVMcXJFw0EPLuwIrcGrcauatcbO7lI2nVww
kBZFepWh7JBRl2x5SXnvTqLnWp2D5w1viUPcBN5xAj9IKOWrRr2kIRLiOVIGh9ta
Hiio2+vg/ZmhsmMzA36xYkH6NvyjNAeLUgTVfEAtsCrRXdW8FYTTGOKDmw55Ma+P
Ej1QWWzbwqPU+h+AOyklVZ1xGncxTkyad5niXYEzBJbbA01QoAtZeY7kSg0ae6uD
YPRQGf+0G6YlCKPOZjBH8AvbedhyjIKZhBT8M2sHIKSESPP0Vs8yS16rYzy8o6+e
7uYsIST+PMWXxDpJHmN2Ks5uo789+TiHfffHzbsTuevNIwk9FbMA6gpDdtMCaFZX
abZxz6sxLv9MoWjIKR2vDZKHjK5DVlJv4V1De3gTsCmfQhhToPzNGGFEI00aBki6
IJIyisOuZtQiXhHy1vN499evLDwkc8u1S6ex6Q7blp75IQmJJ4/WG+XA55D+Mfnd
QSbV+zP9WQu66RR+RDsx+c7L7Bg58bqXE3bPcoLzaHOmDwpw74BGmNu84dfmyKbI
FocSAWP+Oe3sBxcdE7aVS+FB+B30It25LbQeTWVyIE1hbiA8bWVybWFuQGdyZXlz
a3VsbC5jb20+iQHUBBMBCAA+FiEE8/r2aOgu9RJNUYe67yb0aCND9pIFAl+hSOgC
GwMFCQPCZwAFCwkIBwIGFQoJCAsCBBYCAwECHgECF4AACgkQ7yb0aCND9pLwiwwA
hFJbAyUK05TJKfDz81757N472STtB8sfr0auwmRr8Zs1utHRVM0b/jkjTuo4uJNr
7YVVKTKgE7+rJ+pwhm3wlTQ44LVLjByWAi/7NWg3E9b2elm+qkfgm/RfFt3vkuOx
GSyZyIFFh+/twv6iABPvr6w7MZwrFaS0UP3g1VGa5TFqg6KNxod9H/gPLxv45lut
Xf3VvBZTJpr1pxn7aLHlFzEyIgNZbP/N1QF44GSrN/k0DfL631sZjauUXaZXbi5x
GsKKCYwJ1g3q587pi6mTdTV3n0hKgVuipO8hGy5++YeOv+hXsCxDwyZ+Shv+qavd
/SapxYgCdEueuwONIFfsIsWCd3SCcjKXicTTEFMu8nvBmf7xuo2hv6vEOxoijlXV
+4LkGrskdB8ZMg8PywEx6DLmDokgnAhTLrTc1ShbkOtQ3yNjjyFK7BDpqobsJal6
d8SpbhccUJLepaSmsk0CgJsTjhAl6EwX0EYgTo3kP5fScqrbD8VwQaT8CcE4rCV4
nQWGBF+hSOgBDADHtpTT1k4x+6FN5OeURpKAaIsoPHghkJ2lb6yWmESCa+DaR6GX
AKlbd0L9UMcXLqnaCn4SpZvbf8hP4fJRgWdRl5uVN/rmyVbZLUVjM8NcVdFRIrTs
Nyu4mLBmydc3iA/90sCTEOj9e7DSvxLmmLFjpwM5xXLd6z0l6+9G+woNmARXVS3V
/RryFntyKC3ATCqVlJoQBG45Tj2gMIunpadTJXWmdioooeGW3sLeUv5MM98mSB4S
jKRlJqGPNjx5lO6MmJbZeXZ/L/aO6EsXUQD2h82Wphll4rpGYWPiHTCYqZYiqNYr
6E3xUpzcvWVp3uCYVJWP6Ds117p7BoyKVz00yxC9ledF3eppktZWqFVowCMihQE3
676L3DDTZsnJf1/8xKUh5U2Mj3lBvjlvCECKi00qo8b1mn/OklQjJ5T4WzTrH6X+
/zpez8ZkmtcOayHdUKD/64roZ9dXbXG/hp5A+UWj8oSVYKg2QNAwAnZ+aiZ2KVRE
/Y61DCgFg6Ccx/cAEQEAAf4HAwLvYCWT4e84+PjE5pF2+FQAEMmVwTUm5pv9XhBd
Lnw68o0N/OGhi8LLMuhiI22u60W+//6Pknws1FfHI6zVeHZ1V4DcE8JtJcbSqGk4
X1IFSXB60kduyCDLxq7PgqlLac2vr8jOsZAGTM8okJ3jrCrXd0oEPMIPQzo4RKZJ
PeBwUyzTU1+jA5pZjpj+DgpBoC5uZTeGLB2ftbN/w3wBUsZZR3q7WiM7p34+xvST
Obe1u5PerN5BH6zizvCWr2yRGF0RdUYz6q0kQdUorDjqrowYlNi5Em3RIyK1IoFR
MpcZPf9zMODMPZ2VlBruDQu40thr/Ho/5w15QmJ/7SmstGreKerI2jUziHPa4XMo
pUS+jGpIC3pZRa2Y+4UpgtYciuc5CusxzAOYbSh+py1kLuL/tkI54QsLYG2gDcd5
dGz/jxun4irlZ/Iy1GtGM5+SrREktwRD2lIou295XqWOHwJPahPG7xb172VeUfoK
AObWonSJ9uWcsG/FKNo1at9ENA1x+zUV6s+F8B78snQJ96iFIHtz+5NAXQR0pEnD
i7DIHSSGaeZdj2NcbmM6t5/dyN40KHwymYxrItHGL19uRUJiJfgGeI9+dNCRfMOU
4YK+/kiGqH4Yr4WNBmF8zeP51gWDCspCzMKp+Z3wtGXx7j+147iWqW/6ARZ5krJa
oWF+gmesFYFWz54Lr/IuA4usaRSbt+ZnXpJTQip74NOrKF7JpXeVMWY7BN5wcnyO
SXrJrg3xKupq/oZlHnpGiL/UGrr9NZmT/ajg1xjVArkWD0YkwnTRP+CBXLNyrhtd
eLzClaDiv8wXMIm1uWImX7zVv+H7ngfU2aQOMQiU1BbV+pU69bAVdD73glniID1R
HYJHFhOxyF9nFTfBkPM/3rNuJDURLyMhkIyZ3OhIOiDv+5W2Q1swhlfLI5Tf7eCv
wxMGBM508I7TuemCuUk0oqsDnm1Z+oCEWqEI06qvMpGPPO9HU90kELdGDVlnVo6J
wP9UOgXa9LsywaFO+otV/spEpntQXXmHgzLgESyCxe0iHSSv9GxBLk1lTTCgi2qW
B8KI60TJiK3+jTiBR422XMQs5mkvDqOBLuX4dpOuosewPwAEfrl9ZF6z1f2TVVwk
piuHzNcz0NaLWkIrfDb2wIEPEzdCU+pVSfrh3g4S8dMiAK0IMWTYvye5xZ1bd9tN
vwI7ottJiJDk97ScnBU6b//Pb8QbQjjtXbssrfkBaJH/e0cE2WGkUzIQd6sJ8qnq
7mofMB7zU9iD0C3B5BCSnh36vKtGecosrpUmRNfGm79DattdQqAzZSY8rBHvJ+22
KWF1VcqZVxYk5B33jc0p7tXjix2xyMc9IYkBvAQYAQgAJhYhBPP69mjoLvUSTVGH
uu8m9GgjQ/aSBQJfoUjoAhsMBQkDwmcAAAoJEO8m9GgjQ/aSIPcL/3jqL2A2SmC+
s0BO4vMPEfCpa2gZ/vo1azzjUieZu5WhIxb5ik0V6T75EW5F0OeZj9qXI06gW+IM
8+C6ImUgaR3l47UjBiBPq+uKO9QuT/nOtbSs2dXoTNCLMQN7MlrdUBix+lnqZZGS
Dgh6n/uVyAYw8Sh4c3/3thHUiR7xzVKGxAKDT8LoVjhHshTzYuQq8MqlfvwVI4eE
SLaryQ+Y+j5+VLDzSLgPAnnIqF/ui2JQjefJxm/VLoYNaPAGdqoz/u/R0Tmz94bZ
UfLjgQaDoUpnxYywK2JGlf3mPZ3PNWjxJzuQTF5Ge5bz/TylnRYIyBT7KD7oaKHO
62fhDbYPJ4f94iZN4B6nnTAeP34zFDlkUbX4AHudXU7bvxT5OUk9x9c2tj7xwxQH
aEhq2+JsYW0EVw27RLhbymnBfLjVVUktNF0nQGvU2TEocw4pr2ZkDHQkSnlbNa4k
ujlL7VzbpnEgyOmi5er9GaIuVSVADovBu+pz/Ov1y/3jUe8hZ/KleZUFhgRfoUka
AQwA2r2HiLvpnclyZMoeck1LFoVyEU/CjPcYWF1B76ekO9mrlYvbKsnsyL0WcuEq
wCmHdLk70i743Fn21WQK4uvvlvrEpev9aj9DihyLctv4qrPm6wAU/Xibf75tg1iR
L+muMQfv6hQhjdhwkYFx/7XQ6UWkEibqFS7xJwrhz9lHL4KTA4sO5PeW713+mpz7
tM5RmGV6NOQAyEEfAv6OawlWk0f5o8xngIoyo2BS5qIeEBO+iz45+GG8GQC6XufO
Ix7VVl++ZpsxZKtDq/AXfAskxfLRwZMqH9Db5pPMzrL1bPV16AwoWqhAGd2HIMkO
DLEC5XTGIKCqO5+n288rHhAJTqFmE7TpAo+Eb0Tkk4jfm6LyRonmQGpu/Zxa53n5
D6d+AgYWAMeHkEthWJkES4mKpZu4nV21+n9mynnPg8wzthL705Q6IBjtlxX8EP6e
eRFE1BUCNp2RZttTSdI+8iwzYsGOJdJeeXeLOGhvU9/PLkRj9jgZLgCLAo1QGo2o
xetZABEBAAH+BwMCYxRGMNwlr/T4SMsvXNo05Y9gvmJ/vNY89nIF3J/WsBcBChWT
MAls+3BDxHbEjjXb4sWQeGE5IxNUv1TMjZ1CLDAzga5Rm/KICYl3Yo6hWKRWk6qx
fdacQ8Z5aHXtQQ8qJxX2dIPbZtTkmhlCIj3B1H7xThFF/b+oh1+hV8F6kWuKZ2jJ
3cm+hy/sBpnENU0EOMvDAcQZ5QikmyyYPe03MMMEhl4Q51NbwFZi1Fnb1qYieGdh
lRX+92/+V5okFj0zTKLTtglwBcYobAs8Vlwa0bC9Bw5U21U1b4uU0wVrOHsdnGZp
LLZFXxON1t8ZSdNixus2kuUZDuCX2xGKestufSL+6rgf2pQAcoHI61uwwQT1LZGf
wmAieWHy6v+KWBODmTO6P6a3w1mCfI9gVATfWSuhbuIbqgUMLBWsimUB0pdWTwX9
oVKMS+OxL+ZHPoaixFwkFz6GqCJVRJ+rKafmjgOfmCWwCl3VoqGp/fkZKKgrBprP
HB1aIUkiiOvgPOW3ZbjG5SwBFSdjKt+KiWVEAVKnl9XAtzB+SS4fk2aKvezNB3Yf
LW6wmq4U+OkXEfGLpk1KJ81wb/D/ULAI3FRauxB5drlTwJ2mrWqeuJsR4A2sy49t
LKapWlbDlOvFtXtynultB/mc8mhphiaJdMoSKuOspiSSXNk/On9UdSVn0tDDlZEh
QU6iYwtvATo3Q1/RWZI74V4IZqt8R1d0Y+HIn8SNfUp3Zcs5vcqb+YvUGzqveLnl
Dn6ndCrLq7spDAFk9WVObApFYtnuEt9pKmrluQczckXwb7yH6CFCgoF+DjMdYi+L
En869iCp+jW2SVjo0q6SOODrIB2aiIEW8PoRIC/vFSTVgv528s7A6DjXeh0c/hkb
Ud3b5KNCJosz3RArv7ljiYq58Kj4scFr45orj80XulsLbr+tFaN3VNKgEsBDp0ZD
wgISoJr6fzAttqTKsPdzHGh3lNY5RNuP4r3VTgu3dN2ZxIDXxhiIWhbWiXmBz2p0
Y+TRwtgoUluDnMJhFDx8m1w07AqrLT7ivISgHrHwcDZgDGZ8l6rviDk3b8AsKtqY
r//yTXMpTC0kgEb89oHqRd2NiCS4R+2bjWZG+2CtQ7TpCYscbdNdYucEhQGiAUMk
7MJISwC0VSw3xesuHcF8Nx+5vY+GlTrZDIkrS0qKkmOvwSWP0xtSWa1jvIvsd4UK
yoHgDCdvME9UBeIrfqa9JfKAPFE1iGN3uXmq04hwnWwu/vybFA6IjeA2tfbFWWaO
oh2YyXDqhuL8HbUMESiyPOybFXm3aw6HRgIr3OM/R4O6Hv02zNeWJXnkATTKgTje
1xkJuQNXY5N6bpBPkw01Kr20IkJlYXN0IE1hbiA8YmVhc3RtYW5AZ3JleXNrdWxs
LmNvbT6JAdQEEwEIAD4WIQT2ReBH7lvE4oJMlNtC3JHPqKugKwUCX6FJGgIbAwUJ
A8JnAAULCQgHAgYVCgkICwIEFgIDAQIeAQIXgAAKCRBC3JHPqKugK25hC/9VF1fe
kj0IKnrOJRUcK/Cv4RBowl60V91w27ApsoP2awEJiFhY7qRijtkA3NKrT3tke7aT
nC3yAJ8SFOmvIAC94ijb7Iv97xkG+1IIz8pvru9y+dzd2NnvCkts8gFF0CI/xtEM
E90rU3Pay9B5IyrpP++UdmSmnp3Neuwi94BZDfMlqkeiYOzWWSeYbmSSVfKTXeBd
UuTyfRI4m/bPbh6gegOB/XdgSIrNY74D0nR3np0I+s0IGZepK24kgBKfUPwRDk7f
98PXCh29iL3xH+TBxu30WHq7xKmPoXxCRyFLtnKF0MN5Ib276fHnJZM+hXf5i/1E
Pi4NLnk86e7fNI69hwiUd1msEt3VmZWe7anJe/1p3sSXwbQGhhGWM5K41/rQ1CZ9
qD95d6wkHRSc0n4z78qxgYV73yJHinN8xIFnPWbopPPIJbELSoM3IEpHobsj95pH
4hzZAPSmDfOfLzV1G2ec1QPfWnTqUriUt7edDs4//7Cczj6sRh2B6ax2diCdBYYE
X6FJGgEMAMqxn5io6fWKnMz8h5THqp4gEzDuoImapfMKbAKcxEtJmcLkvn+4ufEP
/hcll66InqJHsqMOrdb+zbduCruYWpizhqCIGSsuRu7+ZEEkQFmF5juCOV/5qKQJ
gZZmxSKbRtboapMRR/jmg1pvhnUG7wJOGWi7qv+iRdsWKskDO7tUQE34+ID7IwfD
Ze2fbFKxf66nPlUunF8aMglsvGmtCEzm/xwjunHnmoqZBQIzTdEXIaEwhVosbgY7
A1iwOJ/gT2dcF2KJa7tygrtcbgdVzYCibynwtlvDGXukweuYLQFsObyBG3UHRhJg
61p7n344sy1U9uwCP3/pVCr9bNY9mLZpCgHFkqxErmB8cWouQkbwnqxQFm21KtGF
zjUawuKBXVtDEeA8C5Ha0sx7lw5JrX8GD3EL60qKWjqujJsR1kyijXx1No7Xr9NW
WuPoIDYH06ZoYE+j065VTRqZIGr3NjUZnqT7s9M41roQMnKAzRBXousRXRW9dXfS
5YIG4nWTlwARAQAB/gcDAsDkrCv+8rcr+OKtXIf6oDyx2tbPr+tpZJII4Lqchego
FTB0/GoqHF+iu+uYDCuzkwXBSIAPTCudjhZ+0cwvO4WgjdqGC3zqCc4bCP68cItN
fcLsof5L7rJ8BXX/0YXhua3gFtWGw/EtGpO4tqFCrzkpgEvovP/N1CLFaHnRzWSN
AE0ebsdfTCRYjWuZiAKlWjKCMNmHrE7AB5TraGqclP5GlY28lm7T9KXnNXixFaaR
pLDaLFyGZDEilEjkCKx1cyg3oBNeqUP/Ra6DYEF3PWTGpX8PxBF4lA2qnq+XuUK8
30Nz2upz38Hb1jG1sdNlYEWLv05bFc0vMLWmzwAd6Ij0I4C6WdsakT213frlFw4w
+hoilBcrW5+UOBc1dbU3UFh72khzLdKz1aUVC2N5HN4gS7WTSw3of0sOy+LR4JaH
O8kSlZAIMXCooBDKr/R6x97A5sq8zMQ0vI0LSN2FpfwgdApwWLJYFBAy7ZJU9efO
0f79yEqk7d9xFEsIIpn2R+zVcUdAvj9/EDnbu/QaEj9jl+2PH6arqp1AGurF0Dp/
cB4T7ZCuaunIet5MqEN6Ac4WdjpEcC7tKjB4cQ53Y2f9zCSouXa4JUypsgm4AQZX
O6hejChIiFC31T2x0a3M6eD6+XNw64ShdyX6i153xOe07d78Zq5qnhw+Vz28FQjZ
Lmvbm1sj26WaZmLH6LzyjAJjjV4YH7ijwLjUMdeKeuato0fsCff/cVO7MKDj0aPe
zQCWSbqcnsxl1Agsop82k6Y3W9gco0tVhqYgIwmCjsWvCXAWILk2yIuxIxINRNqX
Pt+TYZR0BuDAUK4x15fUT5tXuu7DmmnmZlWlaba44dtJPB3SFtEM7jJ7xNF0zie3
G+6hxtZSmrjfYHBK2ZD+2veP1j0P/tYD/8n/rZx7u4pHuJiiZksj6ZwGF61HP03Y
zOu0LhhtrTQ1yYaE52mUdkLlQRNwD5b+qDkqN9/PeSzIoDesRVuVE2rbr7sIJ3GS
jxZHin4kHsxzqFQmecm1ctPgx+BpksMPok+MJGzSZ3OwE5tuK0VLvEIcMritgfM7
BixcNPyDv+RkwEguSMlsHq7Lrr+LXtR2XmeFBMgDiulZ5FUxoGiGBfyyG3bsAug9
D4ceg0yUh5HjTTjqoQ1Qo95Wi/yF64WPcZDllJ2BaBznDxlESNR/jmVhwQsrqOdn
NF54SCpU0e3N+XBsdHd4cRsS2lxemn5boK67/FVKdUmVgxo9VnAlreoeB+cVFHap
gfPSXD5V/MuFMiKSFuF63s61EP1T1Okl1cvrE2oAsJTXMgCIVSZdyLUwYKmjsMVD
rtfQXcoOQgFbA90qj7uOOtZId04NiQG8BBgBCAAmFiEE9kXgR+5bxOKCTJTbQtyR
z6iroCsFAl+hSRoCGwwFCQPCZwAACgkQQtyRz6iroCt8igwAgopqy+UgxJ7oTL2z
vOgL1ez7bv+E/U1/7Rdy5MHwr4WF6oZRpIBlgv3GXXeIFH9bFdDhgyPKgh+Tz24J
BL+7YjUtWGe/G/pmmNK1YazB/OxrwiGFpTCyk1zhxEkhMu7Hu3LgD571K+4TUUpa
PCqEeoBBg6O3T29DH1AxpWpEPGXlOrRDHYgVziEpLdUNahAjF53auNWvya+Vc2qZ
wM4NFt608LLf7J5yIA2vbsvf6+gVopPE3whXESKXo08B2hC1f3Pr9/Tgt6oIvy9/
dAcTMalxRyyc42E2wX5kyzDlfhY9kqaNNfaGMZJO5g//gB7BdtrAfo/LhWtary/Y
fAOtbbnMYkf+HODAPZItaIjMZngBM0c0m78YoCetAQE8uBFK6aXmht3BZGPOwgyZ
pK5QT6ClYst2N9ca3tPUEfnddotKySmCEk/JWtu5/0lFl75WzHulc7iUNGJmnUff
VZyH12CjBWsTtqombHDkdEKFocavqpVcCCbKbtW5GZhuZC65lQWGBF+hSUIBDADS
tlWquV7SdREZtxXBVVzdCkV1xkeHYfo2Z244W0LTwmvpbO+o6P5GCAW2c336qWEl
sMO9ujeV2nuUZy3k3AtJLx19iWC+ywYVzJ8f878XAxq0ya1VBBnfsBc7iRI3umf2
JSi+fHXf9l+rJ8Zr5AkLrUo3tQoxX8xWQIfUVY481nlkOvuMtxEI6h1t+z7PWjAJ
sdKKdevRPApPIBGXX0iGE/98ATsLYtvh9ln26j1SrSdtKpPktuYve3zkphlZAdf5
ReViicik6gpEdyEfIxNab6nyV8LTbSeCHe+6/cz+AEqA+cr3K3MwriaapPzNhRV8
izzGnIWChIZptGBKH5nLivfIAB/hbOgU6tM+YgUKrpJCXXA1My2q68o2kARJxh6s
0tuuT6pFEAG9RmzS3ywrPz4PAgkwrJA1uUa9fy9ngkOnQN3CEeVQTUU55b+6zVhW
1Qq8PII6AGqj1lSY9jLpjxEr3q227OlTaxfgg19x5o9rcyccAZlQqzL2p3Z7HZ0A
EQEAAf4HAwKyxiOcJwMkLPhbpalG07ErjqLt73SKDP3Qv5zzkUnBcqE0TbyFtFlp
HFf/Lv60X1m1OBgP4htz+JfikL5XVbWiGEBWvWPJP6VBBLJm+vjENjfKXzrRpcR6
zhpfmJXm3BSXSpRg746AVW5Tjt2Z+dG7leTL+bddgu321OLYrpghyOUblKnRJZ0g
0+vLByFbLWlgtFs3VxPQJw6FmdN9+m748xeVbqzxXwEzScpBZhcGrjHUgnYL4/XY
PxzmZpUZ3qFW/P0uPZ8PdzI9MjEXaDhdxxOj/TP3cc2+XnrpBeWAGajMtulMvt+O
PA/jisn0ZViy7q1fNz+B3j/V++l3UxRAHnI5yaRY91pPlOmnaG4ScCP2o7NAUIiA
/Q3O13hIvVB+iIt9Y3p5WQSBbppHURVlhOOxDkSpXuxe5OhXqFYuDjGyx6hId4xL
b/IP4Gs29ZSG4+6nOZa7GWl4M21Zcw0AX1Gs0+6PPuqlIecW+6e28xxwFQjj7IKt
OvHq6zI810ReWdw9qVp3g9mzqI8x9KcFGdDvZmd4sA0R9GYR6UhvTKTIhdV4wrdO
w2oBe3CpmEnrggtsTrUFykAfjuYRS6aYUjRVv6rdeiWFKyQzBqqboLO9si2RkuNK
H8P2G6BdsLMax/kZKoXuuQ39xq/Li8NJjAoWEMz8iiZ2Io7MGPZobXssoA18Q3dn
tNRPM06cojXoDkXxc5jkQMwJUpuAaa59Zcsgp0sFv7/8nez9ejCaEBTqm1pkEQtd
b6178ld2T6q1jMb/tHWl8CjhH1sZVX2DdEk4SraIFdtGD5vUXo9SkI/QiY0RYwtY
t3tzNnlWMPAWmC+GaZ9QjmPYwEGCXvaGZ4rB2iRPQ+wAvHV6b49txRckLSGm56jb
8WMY7hSC0q3Bj4vFSx78Ytn/H2xwEh/XUiXe1rZhFRXxf7ocLoVS8xx+FL94kzaC
pLKTKoX1udNmYtebkO9llpkW/Z4KeQWJ73uSsjTZhMXVr3fJRanH2twjY8XodG+J
KXuERxMkM2sqnhecsAS8yCLndFLGSDSWyNIA7o6VAtCOpS4TKlYRNmv2XOaxrGgT
7y4hZQBJT8CwP8QuZl9R4WBtNQLPOn9LJCYtKtOznpb2v27BEyeAk4DlKgHJAfcr
kT2xBj/UoJsD72iJMh7SOhmWr7T+gbwAnDlhM1TmtMfWUCAG9Y3fz/metPlMHCKv
ttrgTfyLyvQHgiDTh3iqNEZ7rGK46on72/YYS+DXA3uSNckCaNQXupoIrxqpDjfA
WrrmoRqL4IWMxHzF1RVKAsjXWaYtpbvlMOSNtyMff29WM+MkZqG3IdbkokKJdJf4
/+iQU+738VD43SdPurQcSGUgTWFuIDxoZW1hbkBncmV5c2t1bGwuY29tPokB1AQT
AQgAPhYhBIYPcR68MZb6cOhv9wDz8yhlQWZrBQJfoUlCAhsDBQkDwmcABQsJCAcC
BhUKCQgLAgQWAgMBAh4BAheAAAoJEADz8yhlQWZrD0YMAJp6WkrSzghIgrGmEquh
UPu4n8dnaGraGxu1Om9Z6HrUvphBvm/yZMlZxYbsQRvd8DUCuQD7fScBS12WX3AY
e001REfAbj0kDAdDQ0Z8sFCeCDSBJ9ulX07FzTHH0qROcSv6NONjGYVeTFicL2W0
rATygnFzzjjSGboMq1qA8u6/5JNM7MAxJcIS0Dr8Fhdwv8TwTJrVg6ZzJDHN8OVA
UkPaciQI5lDDP5+kOVqbZZ92Ua8byxKtNACCdSsWZr2OvYyjUz4JKMp5X6yHbDQB
3vlwRkRS7Voo3pUGsdLwiBWiryklSa++DIbBemrALFLc5YnLgfCV0frPOEqsdDwW
ECRxwN4r+2DjY6TYCEEDfhM2Hm7MoMx/jM4uhI4KwPdOKmHsBPVBeXqBRXz32NMM
Zg6to0HRjDapR8AkbfdC5vjiuwnDA6llmxnVtx2oPX3g8RVOIw65f8KfWzWSfzEq
hoKTccsHMMza8J1ax6T6HXkqa/Tt/B/3d7nUzp53V3luG50FhgRfoUlCAQwA4rFx
mKwr4RAoVEqhDDWl8ecd/KQXEg0iCpkkmED6mEpPE9qAi8ORNId66E+rveS1Ssbm
bqVlrN9iHphtvYqvlwwb2IkgPaFpmVSqWrQ3yzEPrL5CLAWiiEq7M4ux7pueYKcO
mv3wQSta9eMgy9jaGUXrxFl4qotCevcEsLzkKC045OdVxkL++NFsiQUSfMYOtgGK
XuBh0ycI/pOb66lY186zPT0tR+QA18uzeCizEjhCZmPIlPHjN8NOEM7ZLU4UQrLd
Srm1quhO6DvGEoO5FulvGtp5hVHdJL5oB7svzNurXB3WVjdXCnRijoaCR07A/X9J
VZY2+kRxdl6ZkuLZxb5UE6usW7pTA5DKiuFG/w6CSGZA1Dv3+yoZnjN8KhnGmIWm
EJgvddWWoaJ3wFvSAGkYa3qBLX3noV3ZCm0c/r2LBcyFGyuyddEhg9wrqWU9vM7W
/4BkTqSJdeMRlS9FD803V9GqxAJBJ1KOSFt2s6b+ekYCI/d+Buso8GPp8eUHABEB
AAH+BwMCnL1QLv+DJ3P4dP2//f1cC7xTsDp9/ogeuz8gxIm6aWtNBhgWgRVgXnma
HsmQeEm7c70Vvt+Kjo9DbKUQbo32pc1Gwd8wvnNZUKtj+9E71hDd05f/SiA2ZTck
8AIRgRUV30Nj2qEgg0nFCWDNfMf0Lx7XH5APMJEZ2GXioiUdUInFlfXBvK6zv4wO
0jIyB/lRO5sCLcC8jNsNfe5oQVcoizziMxaAK91Fv93DeVa2hwqTK3VqBPXa/uyz
6iRMYe//nYIJCNllEsY8whKKfsskIOk1Dwofyuh2IYP6dv3SXhTj+l+qp1uqsg2r
JiThiyNXs0+zeVRxURBSZJrxMLHAs4tdcyckt0fCvM6bUCcDRo9+6w52GSMtb1w3
08oJ+4YOLilJIR041x+Jzs1oTMhAWI5XH02x0mEFKADg/iSexOFSKIfT5RvFYFEj
Fpil+RalypUWzoxjaHFrSV9gxXpdys/qlHb4dr/nMTc/42x2d1xH6HTmlLtTp3rl
vM8/6tmeIhdTkfPtWIyrSfmi61ZCTJ21tKgDNj8r79lxkB6vqX+c86X/ug1tv3Ma
BN96f4QJOGHIhefImnStAw7OyQn3F9qnkj3x7u2f9f1XyDJO4T+WaQcYf67DEDy0
KLpxwjEuT63BYIiWcQHli27lGOj8gAalTnDaWOWRckw7KAemL6cMGwZAHT91aHVH
IKd+dwl3gbArYJxWQ9Fc7lF0Nv4BfEghCOssrQuli9jKkhok61pQrx6L/ekkfeRt
mBjDtZOCO5NOTeeAZlc23TpZ/yjSBY/GPY0jXfnZ40Vm/Kl5VHW3Poc/rJDI67/5
Zz+mL/sTOh2SRKUzsDGQqoQeq0ud1o8LQNf9m/3R+qxII3UsaRxKPDM4O2z7uLqG
v6DG9WVO/6nvoEMrItyh01BfU8l4zLkvXpkcrgbRT4D62w5BgoYpAfHprdqwxCDr
gRIiRKgNy2+kfxv6MVaTlmO8Fa9CR5wxeynx+YvtlIjVEF9SXQaXyb1g/zmnimn2
kAjp/zdPQDLtZRW/cR6EEP6h8zW2jg3p+Owh9tVaZ1WoDfuelRMCuFFoiJ0RHXRQ
ocSzXfw7cB0YCpWR8Rrr0QlQYh/GEbQahTjjk+x0FXmEiCGkvOQeBFY2KUG/597g
maYHwRqfP2LjprG1mFgk0wUz6Juf86RZYD1XszIQPAL1CXf8kSuh49t7MRSgCiSo
qfMfZsMZgftjld5pD0lEqbpohHw/qpZdEklqUpkNUxJbBCWr9lPKirKadeLiXLKP
JI6Q0UEKkdw6lRLrg7UoDtr0vx/Izb3QB1jpKX7m1E/YZhTeVgYnLjrHCBjhJ8cE
kFmM7YC0iuh1TduJAbwEGAEIACYWIQSGD3EevDGW+nDob/cA8/MoZUFmawUCX6FJ
QgIbDAUJA8JnAAAKCRAA8/MoZUFma/gCC/9xkH8EF1Ka3TUa1kiBdcII4dyoX7gs
/dA/os0+fLb/iZZcG+bJZcKLma7DRiyDGXYc7nG3uPvho7/cOCUUg5P/EG5z0CDX
zLbmBrk2WlRnREmK/5NTcisCyezRMXHOxpya4pmExVMqSPGA0QbKGwdHqfbHQv2O
yI3PYBKvlN+eu6e5SEbT76AQijj5RSPcgbko24/sSqJylD1lnRocQK1p4XelosBr
aty4wzYSvQY9dRD4nafxPHI3YjKiAG0I7nJDQ0d1jDaW5FP0BkMvn51SmfGsuSg1
s46h9JlGRZvS0enjBb1Ic9oBmHAWGQhlD1hvILlqIZOCdj8oWVjwmpZ7BK3/82wO
dVkUxy09IdIot+AIH+F/LA3KKgfDmjldyhXjI/HDrpmXwSUkJOBHebNLz5t1Edau
F+4DY5BHMsgtyyiYJBzRGT5pgrXMt4yCqZP+0jZwKt1Ech/Q6djIKjt+9wOGe9UB
1VrzRbOS5ymseDJcjejtMxuCOuSTN9R5KuQ=
=VqO+
-----END PGP PRIVATE KEY BLOCK-----

View File

@@ -1,151 +0,0 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQGNBF+hSOgBDACpkPQEjADjnQtjmAsdPYpx5N+OMJBYj1DAoIYsDtV6vbcBJQt9
4Om3xl7RBhv9m2oLgzPsiRwjCEFRWyNSu0BUp5CFjcXfm0S4K2egx4erFnTnSSC9
S6tmVNrVNEXvScE6sKAnmJ7JNX1ExJuEiWPbUDRWJ1hoI9+AR+8EONeJRLo/j0Np
+S4IFDn0PsxdT+SB0GY0z2cEgjvjoPr4lW9IAb8Ft9TDYp+mOzejn1Fg7CuIrlBR
SAv+sj7bVQw15dh1SpbwtS5xxubCa8ExEGI4ByXmeXdR0KZJ+EA5ksO0iSsQ/6ip
SOdSg+i0niOClFNm1P/OhbUsYAxCUfiX654FMn2zoxVBEjJ3e7l0pH7ktodaxEct
PofQLBA9LSDUIejqJsU0npw/DHDD2uvxG+/A6lgV9L8ETlvgp8RzeOCf2bHuiKYY
z87txvkFwsXgU1+TZxbk+mtCBbngsVPLNarY/KGkVJL+yhcHRD0Pl4wXUd6auQuY
6vQ9AuKiCT1We2sAEQEAAbQeTWVyIE1hbiA8bWVybWFuQGdyZXlza3VsbC5jb20+
iQHUBBMBCAA+FiEE8/r2aOgu9RJNUYe67yb0aCND9pIFAl+hSOgCGwMFCQPCZwAF
CwkIBwIGFQoJCAsCBBYCAwECHgECF4AACgkQ7yb0aCND9pLwiwwAhFJbAyUK05TJ
KfDz81757N472STtB8sfr0auwmRr8Zs1utHRVM0b/jkjTuo4uJNr7YVVKTKgE7+r
J+pwhm3wlTQ44LVLjByWAi/7NWg3E9b2elm+qkfgm/RfFt3vkuOxGSyZyIFFh+/t
wv6iABPvr6w7MZwrFaS0UP3g1VGa5TFqg6KNxod9H/gPLxv45lutXf3VvBZTJpr1
pxn7aLHlFzEyIgNZbP/N1QF44GSrN/k0DfL631sZjauUXaZXbi5xGsKKCYwJ1g3q
587pi6mTdTV3n0hKgVuipO8hGy5++YeOv+hXsCxDwyZ+Shv+qavd/SapxYgCdEue
uwONIFfsIsWCd3SCcjKXicTTEFMu8nvBmf7xuo2hv6vEOxoijlXV+4LkGrskdB8Z
Mg8PywEx6DLmDokgnAhTLrTc1ShbkOtQ3yNjjyFK7BDpqobsJal6d8SpbhccUJLe
paSmsk0CgJsTjhAl6EwX0EYgTo3kP5fScqrbD8VwQaT8CcE4rCV4uQGNBF+hSOgB
DADHtpTT1k4x+6FN5OeURpKAaIsoPHghkJ2lb6yWmESCa+DaR6GXAKlbd0L9UMcX
LqnaCn4SpZvbf8hP4fJRgWdRl5uVN/rmyVbZLUVjM8NcVdFRIrTsNyu4mLBmydc3
iA/90sCTEOj9e7DSvxLmmLFjpwM5xXLd6z0l6+9G+woNmARXVS3V/RryFntyKC3A
TCqVlJoQBG45Tj2gMIunpadTJXWmdioooeGW3sLeUv5MM98mSB4SjKRlJqGPNjx5
lO6MmJbZeXZ/L/aO6EsXUQD2h82Wphll4rpGYWPiHTCYqZYiqNYr6E3xUpzcvWVp
3uCYVJWP6Ds117p7BoyKVz00yxC9ledF3eppktZWqFVowCMihQE3676L3DDTZsnJ
f1/8xKUh5U2Mj3lBvjlvCECKi00qo8b1mn/OklQjJ5T4WzTrH6X+/zpez8ZkmtcO
ayHdUKD/64roZ9dXbXG/hp5A+UWj8oSVYKg2QNAwAnZ+aiZ2KVRE/Y61DCgFg6Cc
x/cAEQEAAYkBvAQYAQgAJhYhBPP69mjoLvUSTVGHuu8m9GgjQ/aSBQJfoUjoAhsM
BQkDwmcAAAoJEO8m9GgjQ/aSIPcL/3jqL2A2SmC+s0BO4vMPEfCpa2gZ/vo1azzj
UieZu5WhIxb5ik0V6T75EW5F0OeZj9qXI06gW+IM8+C6ImUgaR3l47UjBiBPq+uK
O9QuT/nOtbSs2dXoTNCLMQN7MlrdUBix+lnqZZGSDgh6n/uVyAYw8Sh4c3/3thHU
iR7xzVKGxAKDT8LoVjhHshTzYuQq8MqlfvwVI4eESLaryQ+Y+j5+VLDzSLgPAnnI
qF/ui2JQjefJxm/VLoYNaPAGdqoz/u/R0Tmz94bZUfLjgQaDoUpnxYywK2JGlf3m
PZ3PNWjxJzuQTF5Ge5bz/TylnRYIyBT7KD7oaKHO62fhDbYPJ4f94iZN4B6nnTAe
P34zFDlkUbX4AHudXU7bvxT5OUk9x9c2tj7xwxQHaEhq2+JsYW0EVw27RLhbymnB
fLjVVUktNF0nQGvU2TEocw4pr2ZkDHQkSnlbNa4kujlL7VzbpnEgyOmi5er9GaIu
VSVADovBu+pz/Ov1y/3jUe8hZ/KleZkBjQRfoUkaAQwA2r2HiLvpnclyZMoeck1L
FoVyEU/CjPcYWF1B76ekO9mrlYvbKsnsyL0WcuEqwCmHdLk70i743Fn21WQK4uvv
lvrEpev9aj9DihyLctv4qrPm6wAU/Xibf75tg1iRL+muMQfv6hQhjdhwkYFx/7XQ
6UWkEibqFS7xJwrhz9lHL4KTA4sO5PeW713+mpz7tM5RmGV6NOQAyEEfAv6OawlW
k0f5o8xngIoyo2BS5qIeEBO+iz45+GG8GQC6XufOIx7VVl++ZpsxZKtDq/AXfAsk
xfLRwZMqH9Db5pPMzrL1bPV16AwoWqhAGd2HIMkODLEC5XTGIKCqO5+n288rHhAJ
TqFmE7TpAo+Eb0Tkk4jfm6LyRonmQGpu/Zxa53n5D6d+AgYWAMeHkEthWJkES4mK
pZu4nV21+n9mynnPg8wzthL705Q6IBjtlxX8EP6eeRFE1BUCNp2RZttTSdI+8iwz
YsGOJdJeeXeLOGhvU9/PLkRj9jgZLgCLAo1QGo2oxetZABEBAAG0IkJlYXN0IE1h
biA8YmVhc3RtYW5AZ3JleXNrdWxsLmNvbT6JAdQEEwEIAD4WIQT2ReBH7lvE4oJM
lNtC3JHPqKugKwUCX6FJGgIbAwUJA8JnAAULCQgHAgYVCgkICwIEFgIDAQIeAQIX
gAAKCRBC3JHPqKugK25hC/9VF1fekj0IKnrOJRUcK/Cv4RBowl60V91w27ApsoP2
awEJiFhY7qRijtkA3NKrT3tke7aTnC3yAJ8SFOmvIAC94ijb7Iv97xkG+1IIz8pv
ru9y+dzd2NnvCkts8gFF0CI/xtEME90rU3Pay9B5IyrpP++UdmSmnp3Neuwi94BZ
DfMlqkeiYOzWWSeYbmSSVfKTXeBdUuTyfRI4m/bPbh6gegOB/XdgSIrNY74D0nR3
np0I+s0IGZepK24kgBKfUPwRDk7f98PXCh29iL3xH+TBxu30WHq7xKmPoXxCRyFL
tnKF0MN5Ib276fHnJZM+hXf5i/1EPi4NLnk86e7fNI69hwiUd1msEt3VmZWe7anJ
e/1p3sSXwbQGhhGWM5K41/rQ1CZ9qD95d6wkHRSc0n4z78qxgYV73yJHinN8xIFn
PWbopPPIJbELSoM3IEpHobsj95pH4hzZAPSmDfOfLzV1G2ec1QPfWnTqUriUt7ed
Ds4//7Cczj6sRh2B6ax2diC5AY0EX6FJGgEMAMqxn5io6fWKnMz8h5THqp4gEzDu
oImapfMKbAKcxEtJmcLkvn+4ufEP/hcll66InqJHsqMOrdb+zbduCruYWpizhqCI
GSsuRu7+ZEEkQFmF5juCOV/5qKQJgZZmxSKbRtboapMRR/jmg1pvhnUG7wJOGWi7
qv+iRdsWKskDO7tUQE34+ID7IwfDZe2fbFKxf66nPlUunF8aMglsvGmtCEzm/xwj
unHnmoqZBQIzTdEXIaEwhVosbgY7A1iwOJ/gT2dcF2KJa7tygrtcbgdVzYCibynw
tlvDGXukweuYLQFsObyBG3UHRhJg61p7n344sy1U9uwCP3/pVCr9bNY9mLZpCgHF
kqxErmB8cWouQkbwnqxQFm21KtGFzjUawuKBXVtDEeA8C5Ha0sx7lw5JrX8GD3EL
60qKWjqujJsR1kyijXx1No7Xr9NWWuPoIDYH06ZoYE+j065VTRqZIGr3NjUZnqT7
s9M41roQMnKAzRBXousRXRW9dXfS5YIG4nWTlwARAQABiQG8BBgBCAAmFiEE9kXg
R+5bxOKCTJTbQtyRz6iroCsFAl+hSRoCGwwFCQPCZwAACgkQQtyRz6iroCt8igwA
gopqy+UgxJ7oTL2zvOgL1ez7bv+E/U1/7Rdy5MHwr4WF6oZRpIBlgv3GXXeIFH9b
FdDhgyPKgh+Tz24JBL+7YjUtWGe/G/pmmNK1YazB/OxrwiGFpTCyk1zhxEkhMu7H
u3LgD571K+4TUUpaPCqEeoBBg6O3T29DH1AxpWpEPGXlOrRDHYgVziEpLdUNahAj
F53auNWvya+Vc2qZwM4NFt608LLf7J5yIA2vbsvf6+gVopPE3whXESKXo08B2hC1
f3Pr9/Tgt6oIvy9/dAcTMalxRyyc42E2wX5kyzDlfhY9kqaNNfaGMZJO5g//gB7B
dtrAfo/LhWtary/YfAOtbbnMYkf+HODAPZItaIjMZngBM0c0m78YoCetAQE8uBFK
6aXmht3BZGPOwgyZpK5QT6ClYst2N9ca3tPUEfnddotKySmCEk/JWtu5/0lFl75W
zHulc7iUNGJmnUffVZyH12CjBWsTtqombHDkdEKFocavqpVcCCbKbtW5GZhuZC65
mQGNBF+hSUIBDADStlWquV7SdREZtxXBVVzdCkV1xkeHYfo2Z244W0LTwmvpbO+o
6P5GCAW2c336qWElsMO9ujeV2nuUZy3k3AtJLx19iWC+ywYVzJ8f878XAxq0ya1V
BBnfsBc7iRI3umf2JSi+fHXf9l+rJ8Zr5AkLrUo3tQoxX8xWQIfUVY481nlkOvuM
txEI6h1t+z7PWjAJsdKKdevRPApPIBGXX0iGE/98ATsLYtvh9ln26j1SrSdtKpPk
tuYve3zkphlZAdf5ReViicik6gpEdyEfIxNab6nyV8LTbSeCHe+6/cz+AEqA+cr3
K3MwriaapPzNhRV8izzGnIWChIZptGBKH5nLivfIAB/hbOgU6tM+YgUKrpJCXXA1
My2q68o2kARJxh6s0tuuT6pFEAG9RmzS3ywrPz4PAgkwrJA1uUa9fy9ngkOnQN3C
EeVQTUU55b+6zVhW1Qq8PII6AGqj1lSY9jLpjxEr3q227OlTaxfgg19x5o9rcycc
AZlQqzL2p3Z7HZ0AEQEAAbQcSGUgTWFuIDxoZW1hbkBncmV5c2t1bGwuY29tPokB
1AQTAQgAPhYhBIYPcR68MZb6cOhv9wDz8yhlQWZrBQJfoUlCAhsDBQkDwmcABQsJ
CAcCBhUKCQgLAgQWAgMBAh4BAheAAAoJEADz8yhlQWZrD0YMAJp6WkrSzghIgrGm
EquhUPu4n8dnaGraGxu1Om9Z6HrUvphBvm/yZMlZxYbsQRvd8DUCuQD7fScBS12W
X3AYe001REfAbj0kDAdDQ0Z8sFCeCDSBJ9ulX07FzTHH0qROcSv6NONjGYVeTFic
L2W0rATygnFzzjjSGboMq1qA8u6/5JNM7MAxJcIS0Dr8Fhdwv8TwTJrVg6ZzJDHN
8OVAUkPaciQI5lDDP5+kOVqbZZ92Ua8byxKtNACCdSsWZr2OvYyjUz4JKMp5X6yH
bDQB3vlwRkRS7Voo3pUGsdLwiBWiryklSa++DIbBemrALFLc5YnLgfCV0frPOEqs
dDwWECRxwN4r+2DjY6TYCEEDfhM2Hm7MoMx/jM4uhI4KwPdOKmHsBPVBeXqBRXz3
2NMMZg6to0HRjDapR8AkbfdC5vjiuwnDA6llmxnVtx2oPX3g8RVOIw65f8KfWzWS
fzEqhoKTccsHMMza8J1ax6T6HXkqa/Tt/B/3d7nUzp53V3luG7kBjQRfoUlCAQwA
4rFxmKwr4RAoVEqhDDWl8ecd/KQXEg0iCpkkmED6mEpPE9qAi8ORNId66E+rveS1
SsbmbqVlrN9iHphtvYqvlwwb2IkgPaFpmVSqWrQ3yzEPrL5CLAWiiEq7M4ux7pue
YKcOmv3wQSta9eMgy9jaGUXrxFl4qotCevcEsLzkKC045OdVxkL++NFsiQUSfMYO
tgGKXuBh0ycI/pOb66lY186zPT0tR+QA18uzeCizEjhCZmPIlPHjN8NOEM7ZLU4U
QrLdSrm1quhO6DvGEoO5FulvGtp5hVHdJL5oB7svzNurXB3WVjdXCnRijoaCR07A
/X9JVZY2+kRxdl6ZkuLZxb5UE6usW7pTA5DKiuFG/w6CSGZA1Dv3+yoZnjN8KhnG
mIWmEJgvddWWoaJ3wFvSAGkYa3qBLX3noV3ZCm0c/r2LBcyFGyuyddEhg9wrqWU9
vM7W/4BkTqSJdeMRlS9FD803V9GqxAJBJ1KOSFt2s6b+ekYCI/d+Buso8GPp8eUH
ABEBAAGJAbwEGAEIACYWIQSGD3EevDGW+nDob/cA8/MoZUFmawUCX6FJQgIbDAUJ
A8JnAAAKCRAA8/MoZUFma/gCC/9xkH8EF1Ka3TUa1kiBdcII4dyoX7gs/dA/os0+
fLb/iZZcG+bJZcKLma7DRiyDGXYc7nG3uPvho7/cOCUUg5P/EG5z0CDXzLbmBrk2
WlRnREmK/5NTcisCyezRMXHOxpya4pmExVMqSPGA0QbKGwdHqfbHQv2OyI3PYBKv
lN+eu6e5SEbT76AQijj5RSPcgbko24/sSqJylD1lnRocQK1p4XelosBraty4wzYS
vQY9dRD4nafxPHI3YjKiAG0I7nJDQ0d1jDaW5FP0BkMvn51SmfGsuSg1s46h9JlG
RZvS0enjBb1Ic9oBmHAWGQhlD1hvILlqIZOCdj8oWVjwmpZ7BK3/82wOdVkUxy09
IdIot+AIH+F/LA3KKgfDmjldyhXjI/HDrpmXwSUkJOBHebNLz5t1EdauF+4DY5BH
MsgtyyiYJBzRGT5pgrXMt4yCqZP+0jZwKt1Ech/Q6djIKjt+9wOGe9UB1VrzRbOS
5ymseDJcjejtMxuCOuSTN9R5KuSZAY0EX6UhqgEMAO/22am2Urhbg5ClpEYzz2/W
L8ez3tkXKQZa7PsvKUv69jwBwNQmEpMhIPFXhKKwcmmLgYcvnd64xrXM5STxWedy
NaTPlCUDZGW+N4laCbrnHN98Ztu9TvjfjjiQvhHjD/9Ilc5fw5nZsawwTtGOwCkK
opBVKsgHaGrKRl7QP2RTwITwo7CkDBf77kp8wGCECrrSel0cVezSf6UmDs7V3q12
zf7gXBSjWlbA3NnSok6kTNej14IMKfdhiuUG6WFibxEfsOrm8Rv9RbbgpYUTN/ll
yWDTqbVDjYefq/iLs/5w24oI/1K0gy8Rzdl5qu4cvqcwkmxQMvtWWHoT86iHFff/
pp9drPctFfUetHDIKY+1U9VLilaSeCcDx7PCgxazCb7gmtTQWdM4wHcH5+69EEeG
lP4N9efoTRlU+m5ZSuqOtEvLXtP2Gnd/Tgn3lBjHjA+hQ65G96fv40dbYiiHxluo
cFkcwz118alx4cXStfAi6nCsDid2Y9NgWfHrJTs33QARAQABtDlMb3VpcyBIb2xi
cm9vayA8YWNjb3VudHMtZ3Jhc3Nyb290c2Vjb25vbWljc0Bob2xicm9vay5ubz6J
AdMEEwEIAD4WIQTFMBghgDfv6cesuTGwIKs7vZC0mAUCX6UhqgIbAwUJA8JnAAUL
CQgHAgYVCgkICwIEFgIDAQIeAQIXgAAKCRCwIKs7vZC0mEnCC/ih5hk43xHu6KBA
on/ox6wcYLltSuaJawJbPmznrOK6aIJGUDx6E/VEjFeU/+bySrm8y3gk3jqeoPRk
holZmvRW5/mJ/uud+7B83TOfLAHM1EgZtqCqq+Z61yrt46cjqXuQbonP5dFmnOee
Zpg4TP7FRCjAYThy/NtOXY7Ob3OC+MNnDPo71R2se+x4Ac6NKsmNKAETZjZg01R/
2w5Ns77pxub+tihAVasLzmqlGNjqRzLCemR4osMwd0XrtziKiIvPYFlHhysgkpYh
oqbs2bjEdbsac21460j+3wRcvspfDEiLmI0n91s5uK3zCke0tI8BbU5/7IfnflBw
9SVvcu5s6DqFjy3tuRVkVKs0h92YCEH6gfui1RkXPdFheGyZwvZujlfCTU0c2V8U
pmy2TvdUSsWiHSNgY8nimWbxU1fXt7fnWnQk59/Nov4zRO4AJQhXgrb/IyhjKMVa
UYUV/yQgVOqbv5VJnQFTuZrTm5yF47em99wmlZ06cJk0Q6I3QLkBjQRfpSGqAQwA
0WyxXsatq9/kfN8Pd/tRjjUQlo0r9GuAKds8mKyWqk+hsOGYaTczL4qjne4Euwt1
lWg2cC3jsX/9Ai4IX79Kkt4hOk0RbW76+YJJiL6CwsyfyPJASEq2ZqoVBgUJuBbw
uEpMe0OL9ciEJ5oRLwZNgLXZZoHQaYlthHycvC3vEPeTtpGwYj+DfGQmt3af77i9
0xQa1uor3QcvmmkDUb7/6Xv0Qwn8/sQ+GKyPhFia/OOZ50gnGVv2qKCFK1oaWNGz
I5ywhD2Ij2j9ah9M08CEgWVFFibf7PRIq78yRoV9+ZCjWlIq0m2LjCykV9aEnWuw
rWJaUtLn3i2roMieOIILhUUgDemAIV+vlbIlxZDp+XGsbhZ+MJpMUwpfEKK3Q7sD
8tPbH6/2QpPnAezVUnwJJAg8pMLo+QzhTAd3PyPdIkc3yVQQuCycU9El7ysKhmiS
AgqZjqrtPnU4y7SY1II1y/XDFDIuw2MggdolNahzx5VTKrNm5LYUT0m5XQmCehtJ
ABEBAAGJAbwEGAEIACYWIQTFMBghgDfv6cesuTGwIKs7vZC0mAUCX6UhqgIbDAUJ
A8JnAAAKCRCwIKs7vZC0mI4eC/0cMG9+fZfyq8V7wB47L/qmfS0O+bSE4AlZjtoa
30UqbW8Yp2oa1uZXaF8loC3RW9d7VdnSh6K5vSnHh/0+OkwqAbpYDaF6Kuk/zVHX
r4vQ1FaE1uzZisaqEORW/LG+oWiOFDhF7lXGVKj7iXwfFVVudDxHLHj34dC9rrsm
5cTNdHalP0OW00H17nM/R4CR62mkhIM7zUuA1Z8MxSa8I3A9SL35G9iaWRYXE892
KcPYSAgLna7rW+gHD1QI0sqsR6qdaojO5BDVrEYnP58D5aCOTeZ50ACO43JaZlYm
o3jdqBvvYKpYuJ2is1T3unnrY6ztblz78OE+37d9gyAp1j0dhIzOfdpSHCyUYUQL
4YNGLs/3yfelr1XXLwYXzKlioNDu7k4rggwN3td1122p3U1vfVY4qb2eTyjDPizb
NdtbiWRXKFibzG0OoiAaq0ZC8nZsP6xy7vmI05hN7PocAWllJVdpaXVh75OViGaj
KuNFGWsKI1qTd1aEMRQzT4s+JJM=
=8257
-----END PGP PUBLIC KEY BLOCK-----

View File

@@ -1,317 +0,0 @@
import Automerge = require('automerge');
import assert = require('assert');
import fs = require('fs');
import pgp = require('openpgp');
import sqlite = require('sqlite3');
import * as handlers from '../scripts/server/handlers';
import { Envelope, Syncable, ArgPair } from '../src/sync';
import { PGPKeyStore, PGPSigner, KeyStore, Signer } from '../src/auth';
import { SqliteAdapter } from '../src/db';
function createKeystore() {
const pksa = fs.readFileSync(__dirname + '/privatekeys.asc', 'utf-8');
const pubksa = fs.readFileSync(__dirname + '/publickeys.asc', 'utf-8');
return new Promise<PGPKeyStore>((whohoo, doh) => {
let keystore = undefined;
try {
keystore = new PGPKeyStore('merman', pksa, pubksa, pubksa, pubksa, () => {
whohoo(keystore);
});
} catch(e) {
doh(e);
}
if (keystore === undefined) {
doh();
}
});
}
function createDatabase(sqlite_file:string):Promise<any> {
try {
fs.unlinkSync(sqlite_file);
} catch {
}
return new Promise((whohoo, doh) => {
//const db = new sqlite.Database(sqlite_file, (e) => {
const dbconf = {
name: sqlite_file,
port: undefined,
host: undefined,
user: undefined,
password: undefined,
}
const db = new SqliteAdapter(dbconf);//, (e) => {
// if (e) {
// doh(e);
// return;
// }
const sql = `CREATE TABLE store (
id integer primary key autoincrement,
owner_fingerprint text not null,
hash char(64) not null unique,
content text not null
);
`
console.log(sql);
db.query(sql, (e) => {
if (e) {
doh(e);
return;
}
whohoo(db);
});
// });
});
}
function wrap(s:Syncable, signer:Signer) {
return new Promise<Envelope>((whohoo, doh) => {
s.setSigner(signer);
s.onwrap = async (env) => {
if (env === undefined) {
doh();
return;
}
whohoo(env);
}
s.sign();
});
}
async function signData(d:string, keyStore:KeyStore) {
const digest = await pgp.message.fromText(d);
const opts = {
message: digest,
privateKeys: [keyStore.getPrivateKey()],
detached: true,
};
const signature = await pgp.sign(opts);
return {
data: signature.signature,
engine: 'pgp',
algo: 'sha256',
digest: d,
};
}
describe('server', async () => {
await it('put_client_then_retrieve', async () => {
const keystore = await createKeystore();
const signer = new PGPSigner(keystore);
const digest = 'deadbeef';
const s = new Syncable(digest, {
bar: 'baz',
});
const db = await createDatabase(__dirname + '/db.one.sqlite');
let env = await wrap(s, signer);
let j = env.toJSON();
const content = await handlers.handleClientMergePut(j, db, digest, keystore, signer);
assert(content); // true-ish
let v = await handlers.handleNoMergeGet(db, digest, keystore);
if (v === undefined) {
db.close();
assert.fail('');
}
v = await handlers.handleClientMergeGet(db, digest, keystore);
if (v === undefined) {
db.close();
assert.fail('');
}
db.close();
});
await it('client_merge', async () => {
const keystore = await createKeystore();
const signer = new PGPSigner(keystore);
const db = await createDatabase(__dirname + '/db.two.sqlite');
// create new, sign, wrap
const digest = 'deadbeef';
let s = new Syncable(digest, {
bar: 'baz',
});
await wrap(s, signer)
// create client branch, sign, wrap, and serialize
let update = new ArgPair('baz', 666)
s.update([update], 'client branch');
let env = await wrap(s, signer)
const j_client = env.toJSON();
// create server branch, sign, wrap, and serialize
update = new ArgPair('baz', [1,2,3]);
s.update([update], 'server branch');
env = await wrap(s, signer)
const j_server = env.toJSON();
assert.notDeepEqual(j_client, j_server);
let v = await handlers.handleClientMergePut(j_server, db, digest, keystore, signer);
assert(v); // true-ish
v = await handlers.handleClientMergePut(j_client, db, digest, keystore, signer);
assert(v); // true-ish
const j = await handlers.handleClientMergeGet(db, digest, keystore);
env = Envelope.fromJSON(j);
s = env.unwrap();
db.close();
});
await it('server_merge', async () => {
const keystore = await createKeystore();
const signer = new PGPSigner(keystore);
const db = await createDatabase(__dirname + '/db.three.sqlite');
const digest = 'deadbeef';
let s = new Syncable(digest, {
bar: 'baz',
});
let env = await wrap(s, signer)
let j:any = env.toJSON();
let v = await handlers.handleClientMergePut(j, db, digest, keystore, signer);
assert(v); // true-ish
j = await handlers.handleNoMergeGet(db, digest, keystore);
assert(j); // true-ish
let o = JSON.parse(j);
o.bar = 'xyzzy';
j = JSON.stringify(o);
let signMaterial = await handlers.handleServerMergePost(j, db, digest, keystore, signer);
assert(signMaterial)
env = Envelope.fromJSON(signMaterial);
const w = env.unwrap();
console.log('jjjj', w, env);
const signedData = await signData(w.m.signature.digest, keystore);
o = {
'm': env,
's': signedData,
}
j = JSON.stringify(o);
v = await handlers.handleServerMergePut(j, db, digest, keystore, signer);
assert(v);
j = await handlers.handleNoMergeGet(db, digest, keystore);
assert(j); // true-ish
o = JSON.parse(j);
console.log(o);
db.close();
});
await it('server_merge', async () => {
const keystore = await createKeystore();
const signer = new PGPSigner(keystore);
const db = await createDatabase(__dirname + '/db.three.sqlite');
const digest = 'deadbeef';
let s = new Syncable(digest, {
bar: 'baz',
});
let env = await wrap(s, signer)
let j:any = env.toJSON();
let v = await handlers.handleClientMergePut(j, db, digest, keystore, signer);
assert(v); // true-ish
j = await handlers.handleNoMergeGet(db, digest, keystore);
assert(j); // true-ish
let o = JSON.parse(j);
o.bar = 'xyzzy';
j = JSON.stringify(o);
let signMaterial = await handlers.handleServerMergePost(j, db, digest, keystore, signer);
assert(signMaterial)
env = Envelope.fromJSON(signMaterial);
console.log('envvvv', env);
const signedData = await signData(env.o['digest'], keystore);
console.log('signed', signedData);
o = {
'm': env,
's': signedData,
}
j = JSON.stringify(o);
console.log(j);
v = await handlers.handleServerMergePut(j, db, digest, keystore, signer);
assert(v);
j = await handlers.handleNoMergeGet(db, digest, keystore);
assert(j); // true-ish
o = JSON.parse(j);
console.log(o);
db.close();
});
// await it('server_merge_empty', async () => {
// const keystore = await createKeystore();
// const signer = new PGPSigner(keystore);
//
// const db = await createDatabase(__dirname + '/db.three.sqlite');
//
// const digest = '0xdeadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeef';
// let o:any = {
// foo: 'bar',
// xyzzy: 42,
// }
// let j:any = JSON.stringify(o);
//
// let signMaterial = await handlers.handleServerMergePost(j, db, digest, keystore, signer);
// assert(signMaterial)
//
// const env = Envelope.fromJSON(signMaterial);
//
// console.log('envvvv', env);
//
// const signedData = await signData(env.o['digest'], keystore);
// console.log('signed', signedData);
//
// o = {
// 'm': env,
// 's': signedData,
// }
// j = JSON.stringify(o);
// console.log(j);
//
// let v = await handlers.handleServerMergePut(j, db, digest, keystore, signer);
// assert(v);
//
// j = await handlers.handleNoMergeGet(db, digest, keystore);
// assert(j); // true-ish
// o = JSON.parse(j);
// console.log(o);
//
// db.close();
// });
});

View File

@@ -1,24 +0,0 @@
{
"compilerOptions": {
"baseUrl": ".",
"outDir": "./dist.browser",
"target": "es5",
"module": "commonjs",
"moduleResolution": "node",
"lib": ["es2016", "dom", "es5"],
"esModuleInterop": true,
"allowJs": true,
"resolveJsonModule": true
},
"exclude": [
"node_modules",
"dist",
"scripts",
"tests"
],
"include": [
"src/**/*",
"scripts/server/*",
"index.ts"
]
}

View File

@@ -1,24 +0,0 @@
var webpack = require('webpack');
const path = require('path');
module.exports = {
entry: {
index: './dist/index.js',
},
output: {
path: path.resolve(__dirname, 'dist-web'),
filename: 'cic-meta.web.js',
library: 'cicMeta',
libraryTarget: 'window'
},
mode: 'development',
performance: {
hints: false
},
stats: 'errors-only',
resolve: {
fallback: {
"crypto": false,
},
},
};

1
apps/cic-notify Submodule

Submodule apps/cic-notify added at ba8e5989fa

View File

@@ -1,4 +0,0 @@
[AFRICASTALKING]
api_username = foo
api_key = bar
api_sender_id = baz

View File

@@ -1,3 +0,0 @@
[celery]
broker_url = redis://
result_url = redis://

View File

@@ -1,10 +0,0 @@
[DATABASE]
user = postgres
password =
host = localhost
port = 5432
name = /tmp/cic-notify.db
#engine = postgresql
#driver = psycopg2
engine = sqlite
driver = pysqlite

View File

@@ -1,4 +0,0 @@
[TASKS]
africastalking = cic_notify.tasks.sms.africastalking
db = cic_notify.tasks.sms.db
log = cic_notify.tasks.sms.log

Some files were not shown because too many files have changed in this diff.