Quick-Start Guide

The steps below provide a quick-start guide to get a local instance up and running. This includes the server-side services as well as a sample client-side implementation. It is a self-contained deployment which relies entirely on Docker Compose.

Note: Special considerations for a public production deployment such as SSL encryption and load balancing are not covered in this guide, only the core services for a local development setup.

API Services

Let's start with the API server-side services as the rest of the stack depends on them. The examples provided here have been tested with Debian 12 Bookworm. Since they're in containers, only Docker needs to be installed on the host alongside a couple of tools to set things up:

$ sudo apt update
$ sudo apt install docker-compose docker.io openssl jq

Note: More recent versions of Docker which include the native docker compose v2 command can also be used; however, it's not yet available in Debian's docker.io package, so this guide uses docker-compose v1. See the Docker documentation for how to install the latest Docker Engine on Debian and other platforms.
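
With Compose v2, the same commands simply drop the hyphen, for example:

$ docker compose pull
$ docker compose up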

To start the services, a SECRET_KEY environment variable is first required in order to generate authentication tokens. You also need the ADMIN_REGISTER environment variable set to false to allow registering your first user without setting up an admin account. Then it's all docker-compose as usual:

$ echo SECRET_KEY=$(openssl rand -hex 32) >> .env
$ echo ADMIN_REGISTER=FALSE >> .env
$ docker-compose pull
$ docker-compose up

It should show the logs from all the running containers and eventually confirm the API is ready:

renelick-api | INFO:     Starting daemon: KeepAlive
renelick-api | INFO:     Starting daemon: Timeout
renelick-api | INFO:     Application startup complete.

In another shell, check that all the services are up and running as expected:

$ curl -w"\n" http://localhost:8000
{"message":"Renelick API 0.10.0"}
$ docker-compose ps
      Name                     Command               State                         Ports
---------------------------------------------------------------------------------------------------------------
renelick-api        uvicorn renelick.api.main: ...   Up      0.0.0.0:8000->80/tcp,:::8000->80/tcp, 8000/tcp
renelick-frontend   /docker-entrypoint.sh ngin ...   Up      0.0.0.0:80->80/tcp,:::80->80/tcp
renelick-mongo      docker-entrypoint.sh --wir ...   Up      0.0.0.0:8017->27017/tcp,:::8017->27017/tcp
renelick-redis      /entrypoint.sh                   Up      0.0.0.0:8079->6379/tcp,:::8079->6379/tcp, 8001/tcp
renelick-ssh        /usr/sbin/sshd -D                Up      0.0.0.0:8022->22/tcp,:::8022->22/tcp
renelick-storage    /docker-entrypoint.sh ngin ...   Up      0.0.0.0:8002->80/tcp,:::8002->80/tcp

The API service also provides some interactive OpenAPI documentation.
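
Assuming the service follows the usual FastAPI defaults, the interactive documentation should be served at /docs and the raw schema at /openapi.json, for example:

$ curl -s http://localhost:8000/openapi.json | jq .info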

Using curl is fine for simple experiments but it's not practical for doing anything useful with the API. Let's take a look at the client tools.

Command Line Interface

An all-in-one Docker image is available to run the command line tools. API tokens and other user-specific settings are not part of the image, so they need to be provided via a volume. Likewise, exchanging data with the host, e.g. JSON files, requires a shared data directory. Here's a typical way to do it, with a handy alias to make examples easier to read:

$ mkdir -p ~/.config/renelick
$ alias renelick="\
docker run \
-it \
-v $HOME/.config/renelick:/home/renelick/.config/renelick \
-v $PWD/data:/home/renelick/data \
registry.gitlab.com/gtucker.io/renelick:main"

macOS Alternative API URL

On macOS with Docker Desktop, you will also need a settings.toml file to configure the URL of the API server. Run the following to create one:

$ cat > ~/.config/renelick/settings.toml << EOF
[api.default]
url="http://host.docker.internal:8000/"
EOF

To try it out:

$ renelick hello
Connecting to http://172.17.0.1:8000
Renelick API 0.10.0

Note: When run for the first time, Docker may need to pull the image and print some progress bars etc. before running the command.

Many API operations require a user account, so let's create one:

$ renelick user register renelick-admin@gtucker.io admin
Password:
{
  "username": "admin",
  "id": "666ecd0f12e666cca5bb2936",
  "email": "renelick-admin@gtucker.io",
  "full_name": null,
  "is_active": true,
  "is_superuser": false,
  "is_verified": false
}

Note: The email address doesn't need to be a real one for a local instance, but it will need to get verified in the case of a real production deployment.

The next step is to login, which means opening a session and storing a JWT so you don't need to keep entering your password all the time:

$ renelick login admin
Password:
Storing JWT for admin

Note: The default session lifetime is one hour, after which you'll need to log in again. An alternative authentication method is to use a persistent API key with the --method=key option. This is mostly intended for persistent services rather than interactive users.
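
As a sketch, assuming the option is accepted by the login command, a key-based login might look like this:

$ renelick login --method=key admin   # sketch: assumes login accepts --method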

It's usually a good idea to make the first user account a system administrator or superuser. This can be done with the dedicated renelick-admin tool, which bypasses the API and authentication. We'll also mark it as verified to avoid going through a real email verification. You'll need to either copy your user id from the output of the register command run earlier, or use jq to extract it from the output of the user get command as shown here:

$ uid=$(renelick user get | jq -r .id)
$ docker-compose exec api renelick-admin $uid set verified
$ docker-compose exec api renelick-admin $uid set superuser

Now you can get your user account details again with the whoami command and confirm the flags are set as expected:

$ renelick whoami
  id             666ecd0f12e666cca5bb2936
  username       admin
  email          renelick-admin@gtucker.io
  full_name
  is_superuser   True
  is_verified    True

Data Nodes

What can we do now? The building block for all Renelick data is the Node, so let's start with this. To create a trivial node and then get it back, first create a data/hello.json file:

{
    "name": "hello",
    "data": {
        "greetings": "Hello!"
    }
}

Then to send it and get it back again using its id:

$ node_id=$(renelick add-node data/hello.json)
$ renelick get-node --indent=2 $node_id
{
  "id": "66f1884911f0918098b802fa",
  "name": "hello",
  "parent": null,
  "artifacts": {},
  "kind": "node",
  "data": {
    "greetings": "Hello!"
  },
  "task": null,
  "path": [
    "hello"
  ],
  "created": "2024-09-23T15:24:57.401000",
  "owner": "admin"
}
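
The parent and path fields suggest that nodes form a hierarchy. As a sketch, assuming the parent field accepts an existing node id, a child node could then be created like this:

$ # sketch: assumes the "parent" field accepts an existing node id
$ cat > data/hello-child.json << EOF
{
    "name": "hello-child",
    "parent": "$node_id",
    "data": {
        "greetings": "Hello again!"
    }
}
EOF
$ renelick add-node data/hello-child.json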

Storage

File storage is not managed by the API itself; only URLs are stored in the database. The actual resources can be stored anywhere as long as they're directly reachable using those URLs. For a local development setup, a self-contained storage solution using SSH for uploads and HTTP for downloads is provided. To use it, first create an SSH key on your host:

$ sudo apt install openssh-client
$ ssh-keygen -f ~/.ssh/id_rsa_renelick
$ cat ~/.ssh/id_rsa_renelick.pub >> docker/ssh/user-data/authorized_keys

Then to check the key is set up correctly, you should be able to run arbitrary commands over SSH in the ssh container exposed on port 8022 (answer 'yes' when the SSH client warns about the unknown authenticity of the host):

$ ssh -i ~/.ssh/id_rsa_renelick -p 8022 renelick@localhost whoami
renelick

As a handy way to avoid having to specify the key and port number every time, you can add this section to your ~/.ssh/config file, which relies on the Docker bridge's standard IP address 172.17.0.1 rather than just localhost:

Host 172.17.0.1
    IdentityFile ~/.ssh/id_rsa_renelick
    User renelick
    Port 8022

Now you should be able to simply run this without the -i and -p options:

$ ssh 172.17.0.1 whoami
renelick

Then to manually upload and retrieve some files, for example now.txt with the current date and time:

$ date > now.txt
$ cat now.txt
Wed  1 May 15:58:09 CEST 2024
$ scp now.txt 172.17.0.1:/home/renelick/data/now.txt
$ curl http://localhost:8002/now.txt
Wed  1 May 15:58:09 CEST 2024
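
Such storage URLs are what nodes are expected to reference. As a sketch, assuming the artifacts field maps arbitrary names to plain URLs (it appeared empty in the node output earlier), a node pointing at the uploaded file could be created like this:

$ # sketch: assumes "artifacts" maps names to plain URLs
$ cat > data/now-node.json << EOF
{
    "name": "now",
    "artifacts": {
        "now": "http://localhost:8002/now.txt"
    }
}
EOF
$ renelick add-node data/now-node.json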

Development

The steps above already make it possible to develop applications with a local instance. This section goes a bit further to enable making changes in the Renelick stack itself and running client-side code in development containers.

API Services

On the API side, the docker-compose setup already mounts the local source code inside the containers. Editing the files under renelick/api will cause the API service to be automatically restarted with the live version. So nothing special needs to be done there to start developing; the Docker images only need to be rebuilt after changing the dependencies in pyproject.toml or when editing the Dockerfile itself:

$ docker-compose down --remove-orphans
$ docker-compose build
$ docker-compose up
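
When only one image is affected, the corresponding service can also be rebuilt and restarted on its own, for example the API:

$ docker-compose build api
$ docker-compose up -d api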

Clients

Then on the client side, a set of aliases is available to facilitate running commands in a Docker container while having access to the local directories:

$ source scripts/devenv

Sample commands using the local source checkout:
$ rk hello
$ rki register email@xyz.com username
$ rksh pycodestyle --verbose renelick/api
$ rkish bash

The rk alias runs the command line tool from the local version of renelick.client.main, but within Docker to avoid having to install all the dependencies on the host. The rki alias is an interactive variant for particular commands such as rki register, which reads a password from user input. Beware that Docker introduces DOS line endings in interactive mode (with --tty), which can cause issues when parsing JSON output with jq or any other tool that expects Unix \n line endings.
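
One way to work around this is to strip the carriage returns before parsing, for example:

$ rki register email@xyz.com username | tr -d '\r' | jq -r .id   # strip CRs introduced by --tty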

Likewise, rksh and rkish can be used to run arbitrary commands within the Docker image, either non-interactively or interactively. These four aliases cover all the use cases, except when formatted output is required from an interactive command, a scenario the CLI should therefore try to avoid.

Similarly to the API services, the Docker image used in these aliases doesn't need to be rebuilt except when making changes to the dependencies or the Docker image setup itself. The source code directories are mounted as volumes so any changes will be made directly available inside the containers.

If the all-in-one renelick Docker image needs to be rebuilt with all the dependencies enabled ([cli,api,dev]), this handy script is provided:

$ scripts/docker-build

Frontend

As the web frontend is still very new, there are no detailed instructions for setting up a development environment yet. The image typically needs to be rebuilt with docker-compose build frontend every time changes are made to the source code. For now, see the frontend README.md for more details on how to run a live server as per the standard FastAPI full-stack template.
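
For example, after editing the frontend source code:

$ docker-compose build frontend
$ docker-compose up -d frontend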