# CIC STAFF CLIENT
Services installer, temporarily intended for internal use by GE.
## Dependencies
The OS-level dependencies below must be met at both install and run time.
The version numbers listed are those used at implementation time. Earlier versions of the components may well work, as long as they are not too old.
- systemd (249)
- gcc (11.1.0)
- git (2.33.0)
- python (>= 3.9)
- pip (20.3.4)
- sqlite (3.36.0)
For the optional bloxberg node build (`INSTALL_EVM=bloxberg`), the following additional dependencies must be met, as well as a working internet connection:
- rustup (1.24.3)
- clang (12.0.1)
- cmake (3.21.2)
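As a quick sanity check before installing, you can query the versions of the required tools. A minimal sketch, assuming all the tools are on your `PATH` and respond to their standard version flags:
```
# versions of the OS-level dependencies
systemctl --version | head -n1
gcc --version | head -n1
git --version
python3 --version
pip --version
sqlite3 --version

# only needed for the optional bloxberg node build
rustup --version
clang --version | head -n1
cmake --version | head -n1
```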
## Installation settings
The examples below assume the working directory is the root of the cic-staff-installer repository.
<!--The `CIC_ROOT_URL` environment variable points to a location where top-level configuration settings can be found. These are individual files named after the environment variables they are setting, whose contents are signed by the trusted key defined in `CIC_SETUP_TRUSTED_FINGERPRINT`.-->
### cic-stack docker-compose cluster settings
To use against the cic-stack docker-compose local cluster:
```
export CIC_ROOT_URL=file://`pwd`/var
```
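To confirm that the URL points at an existing directory, a quick check (assuming the `file://` scheme used above):
```
# strip the file:// scheme and list the directory it points to
ls "${CIC_ROOT_URL#file://}"
```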
If you want to select Python packages from a specific repository only, also add:
```
export PIP_INDEX_URL=<url>
export PIP_EXTRA_INDEX_URL=<url>
```
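For example, with a hypothetical internal mirror (the URLs below are placeholders, not real endpoints):
```
# hypothetical values for illustration only
export PIP_INDEX_URL=https://pip.mycompany.example/simple
export PIP_EXTRA_INDEX_URL=https://pypi.org/simple
```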
If you want to build the bloxberg node executable (be warned, that's a long wait), add:
```
export INSTALL_EVM=bloxberg
```
## Installation
To proceed with the installation, enter:
```
bash setup.sh
```
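Since the optional bloxberg build in particular can take a long time, it may be useful to keep a log of the run; a simple sketch using standard shell redirection:
```
# run the installer and keep a copy of all output in setup.log
bash setup.sh 2>&1 | tee setup.log
```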
During the installation you will be prompted to enter your name, email and a password for the gnupg setup.
The gnupg key will be used both to authenticate using HTTP HOBA when necessary and to encrypt local cached content.
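Once the installation has finished, you can inspect the generated key. This assumes the gnupg homedir and fingerprint file locations listed under "Files and directories" below:
```
# list the key generated for the staff client
gpg --homedir ~/.config/cic/staff-client/.gnupg --list-keys

# show the fingerprint of the key used for authentication
cat ~/.config/cic/staff-client/key_fingerprint
```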
## Running the services
```
systemctl --user start cic-cache-tracker
systemctl --user start cic-cache-server
```
Verify that they are running:
```
systemctl --user status cic-cache-tracker
systemctl --user status cic-cache-server
```
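If you want the services to start automatically, a sketch using standard systemd tooling: enable the units, and optionally enable lingering so the user services keep running without an active login session:
```
systemctl --user enable cic-cache-tracker
systemctl --user enable cic-cache-server

# optional: let user services run even when you are not logged in
sudo loginctl enable-linger $USER
```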
Please note that although installing the bloxberg node is an optional feature of the installer, there are currently no supporting files for running the bloxberg node.
To run the bloxberg node manually:
```
$HOME/.local/bin/parity -c $HOME/.local/share/io.parity.ethereum/bloxberg/bootnode.toml
```
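To keep the node running in the background instead of occupying a terminal, one option is a transient systemd user unit (a sketch; the unit name `bloxberg-node` is just an example):
```
systemd-run --user --unit=bloxberg-node \
  $HOME/.local/bin/parity -c $HOME/.local/share/io.parity.ethereum/bloxberg/bootnode.toml

# follow its output
journalctl --user -u bloxberg-node -f
```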
## Using `clicada`
It should now be possible to run `clicada` without any extra settings.
Please refer to the documentation on `clicada` for details on how to use the tool.
## Files and directories
The installation produces a number of files in the user's home directory, some of which may be edited directly to change the behavior of the programs.
All paths are relative to `$HOME`.
| location | description | editable |
|-|-|-|
| `.config/cic/cache/*.ini` | Configuration file(s) for the `cic-cache-*` services | yes |
| `.config/cic/clicada/*.ini` | Configuration file(s) for the `clicada` tool | yes |
| `.config/cic/staff-client/key_fingerprint` | gnupg key fingerprint for key used by clicada for authentication | no |
| `.config/cic/staff-client/user.asc` | gnupg public key used by `clicada` for authentication | no |
| `.config/cic/staff-client/.gnupg` | gnupg homedir used by `clicada` for authentication | no |
| `.config/systemd/user/cic-cache-*.service` | systemd user service definition file for `cic-cache-*` services | yes, with `systemctl --user edit <service>` |
| `.config/environment.d/01-cic-cache-*.conf` | environment variables for systemd user services | yes |
| `.local/share/cic/.gnupg` | gnupg homedir for holding trust keys for global cic configurations | no |
| `.local/share/cic/clicada/.secret` | A gnupg encrypted symmetric secret used to encrypt local cached content | no |
| `.local/share/io.parity.ethereum/bloxberg` | Bloxberg configurations and chain data | no |
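As an example of working with the editable entries, to change a `cic-cache-*` service definition use the `systemctl --user edit` route noted in the table, then restart the service:
```
# create a drop-in override for the service definition
systemctl --user edit cic-cache-tracker

# restart the service to apply the change
systemctl --user restart cic-cache-tracker
```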
## Installing as a different user
You may want to create a dedicated user for the installation, so as not to pollute your regular user data directories.
Since the services are run as systemd user services, which require a proper login session registered with `systemd-logind`, a simple `su` or `sudo` will not be sufficient in this case.
Perhaps the simplest solution is to launch a new login shell through systemd's machine manager using the following command:
```
machinectl login
```
Another alternative is to open an `ssh` session.
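Putting it together, a sketch assuming a hypothetical dedicated user named `cic`; `machinectl shell` is an alternative to `machinectl login` that drops you straight into a shell as the given user, still within a proper login session:
```
# create the dedicated user (the name is just an example)
sudo useradd -m cic
sudo passwd cic

# open a shell as that user with a full login session
machinectl shell cic@
```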