Command-Line Interface
The primary use case for the renelick command-line tool is to let users
interact with data, tasks and events. It is designed as a thin layer on top of
the API bindings which makes it well suited for debugging and prototyping. It
also includes features to manage user accounts, credentials, storage and
general API administration.
The classic --help option can be used to provide a short description of what
each command does. For example:
$ renelick --help

 Usage: python -m renelick.client.main [OPTIONS] COMMAND [ARGS]...

 Renelick command line

╭─ Options ─────────────────────────────────────────────────────────╮
│ --api                 TEXT  API name as per settings.toml         │
│                             [default: None]                       │
│ --user                TEXT  Username for API authentication       │
│                             [default: None]                       │
│ --install-completion        Install completion for the current    │
│                             shell.                                │
│ --show-completion           Show completion for the current shell,│
│                             to copy it or customize the           │
│                             installation.                         │
│ --help                      Show this message and exit.           │
╰───────────────────────────────────────────────────────────────────╯
╭─ Commands ────────────────────────────────────────────────────────╮
│ hello      Query the API to get its version                       │
│ login      Open a user session                                    │
│ whoami     Show current user information                          │
│ version    Print the Python package version                       │
│ api        API instance settings                                  │
│ event      Operations on Pub/Sub events                           │
│ node       Operations on data nodes                               │
│ service    Run generic services                                   │
│ storage    File storage operations                                │
│ task       Low-level operations on tasks                          │
│ user       User accounts management                               │
╰───────────────────────────────────────────────────────────────────╯
We'll go through each one of them further down. Before we start, let's consider some general concepts that apply throughout the CLI syntax.
Global options
Many commands require an open session with an API instance. As such, the two options below are defined globally and need to be provided before a subcommand:
- --api to specify the name of the API instance as per settings.toml
- --user to specify the username for the current session
Default API URL
If no particular API instance is specified then the local Docker one will
be used by default: http://172.17.0.1:8000.
hello command with --api=local
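As a sketch, assuming a local entry exists in settings.toml as per the
Quick-Start Guide (the exact greeting output will vary with the API version):

$ renelick --api=local hello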
To avoid having to provide them manually in every command, it's common practice
to set default values in settings.toml under the [default] section. See
the example below with some typical settings values.
Default section in TOML settings
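A minimal sketch; the key names are assumed to mirror the global --api and
--user options:

[default]
api = "local"
user = "admin"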
Then the hello command becomes:
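$ renelick hello

No global options are needed as the defaults above are picked up automatically.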
It's also possible to manage API Instance Settings
with the renelick api command and select a default one.
Fields with key-value pairs
Some commands can accept arbitrary data fields. These follow a particular
syntax with key-value pairs and an optional value modifier. The simplest
syntax for a field is key=value. A number of rules apply to the key
format:
- alphanumeric characters are accepted: a to z, A to Z and 0 to 9
- dash - and underscore _ are accepted
- double underscore __ is not accepted as it's reserved for modifiers
- dot . is used for addressing fields inside objects, i.e. data.key=value
- no other characters are allowed for keys
There are no restrictions on the value which can contain anything including new lines, provided they're wrapped in double quotes so that the shell can parse it as a single string. Here are a few valid examples:
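The field names below are purely illustrative:

name=example
build-id=42
data.arch=x86_64
data.log="first line
second line"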
Nerd's corner: the key=value regular expression is
^([a-zA-Z_0-9\-\.]+)=(.*)
A number of modifiers are also available to construct values other than plain
strings. The syntax then becomes key__modifier=value. For example, the
simple modifier int will convert the value to an actual integer. Without the
modifier, the value would remain a plain string.
The int modifier
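As a sketch using the node make command described further down, which only
prints the resulting JSON and doesn't touch the API (the field names are made
up):

$ renelick node make name=demo data.count__int=42

With the modifier, data.count becomes the integer 42; with plain
data.count=42 it would remain the string "42".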
The available modifiers are:
- int: convert value to an integer
- float: convert value to a floating point number
- null: return null regardless of the value, which may be empty e.g. foo__null=
- bool: convert value to a boolean
- file: load the contents of the file at the provided path into the value
- b64: load the contents of the file and Base64-encode it into the value
- json: load the contents of the JSON file into the value as an object
- yaml: load the contents of the YAML file into the value as an object
Another important thing to note is that fields are processed in the provided
order. So when loading an object with json or yaml, any subsequent fields
may be applied on top using the dotted syntax. For example, with a
foo-bar.json file containing {"foo": "bar"}, here's what the result would
be with additional fields specified on the command line:
Modifiers on top of JSON data
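A plausible sketch with node make and hypothetical field names, where
data__json loads the file contents into the data object:

$ renelick node make data__json=foo-bar.json data.foo=baz data.value__int=123

The resulting data object would be {"foo": "baz", "value": 123}: the JSON
file is loaded first, then the later fields are applied on top of it.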
Finally, if the key doesn't contain any field name when loading from JSON or YAML then the object gets added at the root:
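For example, with the same foo-bar.json file (a sketch, assuming an empty key
before the modifier selects the root of the object):

$ renelick node make __json=foo-bar.json

Here foo would end up at the root of the node object rather than nested under
another field.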
API field operators
Some commands may involve searching for database objects that match certain
criteria or filtering the result in a particular way. These make use of API
comparison operators which are processed on the server side. They follow the
same key__operator=value syntax as key-value fields but are used differently.
For example, to find data nodes with a numerical field greater than a
particular value:
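A sketch with a hypothetical data.value field:

$ renelick node find data.value__gt=100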
The available operators are:
- lt: lower than
- lte: lower than or equal
- gt: greater than
- gte: greater than or equal
- ne: not equal
- regex: matches the provided regular expression
An additional special-case operator is sort which can be used to sort the
results returned by the API using the key__sort=direction syntax:
The direction can be any of the following:
- increasing order: 1, up, asc
- decreasing order: -1, down, dsc, desc
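For example, to get the most recently created nodes first (a sketch relying on
the created field present in node objects):

$ renelick node find kind=node created__sort=desc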
Interactive shell
When running the command line with no arguments, an interactive shell is
opened. This allows running all the commands defined below without typing
renelick and also keeps the client code initialised to provide faster
interaction. It keeps a history of commands and Ctrl-R can be used to search
for previous commands. Regular shell commands can't be run though, so data
files need to be managed via a regular shell.
The commands to set the default API and username cause the TOML settings to be
reloaded, so it's possible to dynamically switch without restarting the shell.
The prompt is currently set to [username@api] so the user always knows which
account and API instance they're interacting with.
Interactive shell session
$ renelick
Renelick shell 0.20.0
[admin@local] whoami
Authenticating with persistent API key
Connecting to http://172.17.0.1:8000
User profile:
id 684d54f2ca5fca759d30aeaa
username admin
email admin@example.com
full_name
is_superuser True
is_verified True
[admin@local] api select gtucker.io
[admin@gtucker.io] user select gtucker
[gtucker@gtucker.io] whoami
Authenticating with persistent API key
Connecting to https://renelick.gtucker.io
User profile:
id 681dd2ff409e63a6dfd1f212
username gtucker
email gtucker@example.com
full_name Guillaume Tucker
is_superuser False
is_verified True
[gtucker@gtucker.io] node add data/example-node.json
68e8f025ea5c4cb77d8f67cc
[gtucker@gtucker.io] node find name=example
[
{
"id": "68e8f025ea5c4cb77d8f67cc",
"name": "example",
"lineage": [
"68e8f025ea5c4cb77d8f67cc"
],
"path": [
"example"
],
"created": "2025-10-10T11:38:13.401000",
"owner": {
"email": "gtucker@example.com",
"username": "gtucker",
"full_name": "Guillaume Tucker"
},
"parent": null,
"kind": "node",
"data": {
"foo": "bar",
"value": 123
},
"task": null
}
]
Top-level commands
version
Show the current Renelick Python package version.
hello
Connect to the API and get the greetings message with the version string. It's
especially useful to check the API is reachable and the settings are valid.
This is an alias for api hello.
login
Open a session with an API instance. This is an alias for user
login.
whoami
Show information about the user currently logged in. When using the JWT
authentication method, it will show the time left before the token expires.
This is an alias for user whoami.
API instance settings
The TOML user settings may contain multiple entries for various API instances.
There is typically a local development one as per the Quick-Start
Guide. Then a number of public ones can be added with their own
URLs and other details such as the request timeout duration. The current API
can be selected by making it the default one when not providing the --api
option. The api subcommand makes it easy to add, remove, update and select
API instances without editing the TOML file by hand.
api hello
Query an API to get its revision.
API instance check
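A minimal sketch; the exact revision string will vary:

$ renelick api hello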
api add
Add an API instance entry to the TOML settings. The name and URL are required, the API version and query timeout are optional.
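A plausible sketch; the positional argument order is an assumption:

$ renelick api add staging https://renelick.staging.example.com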
api remove
Remove an API instance entry from the TOML settings. Only the API name is required.
Please note that removing the current instance will cause an error when trying
to use it, so another one will need to be selected using api
select first.
api update
Update any attribute of an existing API instance in the TOML settings.
api select
Set a given API instance as the default one. Only the name is required and the entry needs to be present in the TOML settings.
api current
Show the name and URL of the currently selected API instance.
api list
Show a list of all the API instances found in the TOML settings with their names and URLs.
api info
Show the TOML settings for either the current API instance or any given one if a name is provided. This does not interact with the actual API servers; it only looks at the TOML settings.
User management
The user subcommand is to manage a user's personal details and credentials.
Some commands or options require "superuser" status or admin rights, typically
when dealing with other users' accounts.
user register
Register a new user with a given email address and username. This can be done by the users themselves on public API instances or local development setups.
Verifying a user
Newly registered users aren't verified yet, please see the
user verify command below. Alternatively, this can be
done by a system administrator via the renelick-admin command as per the
Quick-Start guide.
On private instances, registering a new user requires admin rights. As such,
there needs to be an open admin session for the username provided via the
--admin option. This feature is enabled on the API side via the
ADMIN_REGISTER environment variable.
Registering a user on a private instance
$ renelick login admin
Password for user admin@example.com: <enter password interactively>
Storing jwt credential for user admin
$ renelick register someone@example.com someone --admin=admin
Password:
{
"username": "someone",
"id": "67458a24c7d88c42d00a8a44",
"email": "someone@example.com",
"full_name": null,
"is_active": true,
"is_superuser": false,
"is_verified": false
}
If no API instance has been specified, the default values for the local
development one will be used and an [api.default] entry will be automatically
added to the TOML settings file for it.
user verify
Users need to have their email address verified in order to access the parts of
the API that require authentication. Verification needs to be done with new
user accounts and whenever their email address is changed. The prerequisite is
for the API server to have SMTP settings defined in the environment so that it
can send emails. Then simply run the user verify command and copy the token
received by email.
User email verification
$ renelick whoami
User profile:
id 6893aab32320de4493944d3d
username bob
email bob@example.com
full_name Bob Something
is_superuser False
is_verified False
$ renelick user verify
Please check your emails for bob@example.com
Token: <copy token here>
$ renelick whoami | grep is_verified
is_verified True
It's also possible to verify an email address via the web frontend by providing
the --web-url argument, although this is still an experimental feature. For
example, when running a local instance with the frontend-dev service this
would be --web-url=http://localhost:8080. The verification email then
contains a link rather than a plain-text token.
user login
Once registered, a user can login using this command. It will save a
credential locally in ~/.config/renelick/credentials.json for a given
username and API. There are two kinds of credentials which can be selected
with the --method option: JWT for self-expiring tokens and API keys for
persistent sessions. Typically, interactive sessions should rely on JWT while
API keys are better suited for automated services.
The login command takes an optional argument which can be either a user's
email address or a username; the latter is used to look up the email address in
the TOML settings. After a successful login attempt, the user's email address
will be automatically stored in the TOML settings so that subsequent logins can
be done with the username.
Login with email address and username
First login using the email address:
$ renelick login bob@example.com
Password for user bob@example.com: <enter password interactively>
Storing jwt credential for user bob
Storing email address: bob@example.com
Then subsequent logins can be done with the username:
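A sketch of what the session might look like, mirroring the first login:

$ renelick login bob
Password for user bob@example.com: <enter password interactively>
Storing jwt credential for user bob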
user select
Change the default user to the provided username. This then gets saved in the
TOML settings, similarly to api select.
user current
Show the current default username as per the TOML settings, similarly to api
current.
user whoami
Retrieve the current user profile and show it. This can also be used to check the current authentication method and API being used.
user update
Update any of the fields for the current user profile. Currently, only the email, username and full name can be edited this way. If the command is run as superuser, it's also possible to edit any arbitrary user profile by providing its UID.
user get
Similarly to user whoami, this retrieves the current user database entry and
prints it in plain JSON. If run by an admin user, a UID can be provided to get
the profile of any arbitrary user instead.
user get-all
Retrieve all the users from the database in JSON format. This can only be done by admin users.
user cred add
Add an arbitrary credential interactively. While the login command will
automatically save API credential entries, this command can be used to store
other kinds of credentials which may be required by applications such as
third-party API keys.
Credentials can't be overwritten
Credentials can be added or deleted but not updated in-place, to avoid accidentally overwriting an existing one. As such, replacing credentials requires them to be explicitly deleted first and then added again with the new value.
user cred delete
Delete an arbitrary credential, following the same syntax as user cred add.
user cred export
Dump the credentials of the current user in plain JSON either into a particular
file or stdout if the output path is -. This can then be used to import the
credentials again in another environment, for example when saving an API key as
a secret for a service running in Kubernetes or any kind of client application
in an automated system.
user cred list
List the names of the credentials associated with the user. This is a simple way of discovering the available credentials without exposing their values.
Data nodes
Node manipulation commands can be very useful during prototyping and for
applications that rely on shell scripts rather than the API bindings. They can
be easily combined via intermediate JSON data to enable advanced
features with a simple syntax. Before going through the details of each
command, here's an example which generates some node data with node make and
adds it to the database directly with node add:
Data node manipulation
Create a node directly on the command line:
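A plausible pipeline combining node make with node add reading from stdin; the
field names match the output shown below:

$ renelick node make name=Hello data.value__int=123 | renelick node add -
684579cd31fb74700e0d422d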
Retrieve the node using its id:
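$ renelick node get 684579cd31fb74700e0d422d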
{
"id": "684579cd31fb74700e0d422d",
"name": "Hello",
"lineage": [
"684579cd31fb74700e0d422d"
],
"path": [
"Hello"
],
"created": "2025-06-08T11:53:49.549000",
"owner": "gtucker",
"parent": null,
"artifacts": {},
"kind": "node",
"data": {
"value": 123
},
"task": null
}
Find nodes using arbitrary fields:
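A sketch matching the node above by name, returning a JSON list:

$ renelick node find name=Hello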
node make
Make a temporary data node object and print it as JSON. This doesn't add the
node to the database and doesn't interact with the API at all. The output can
then be used with node add to actually send it and add a node. The data is
provided as key/value fields.
node add
Add a new node object to the database. The node data is provided in JSON
format, either in a file or via stdin if the file path is -. The command
then prints the id of the new node.
node make-batch
As an experimental feature, this is mainly useful during development to create
arbitrary datasets and simulate what real applications would do. It works
similarly to node make except it generates a list of trees of nodes as a
"batch" which can then be sent to the API using node add-batch. It takes a
YAML file with Jinja2 templating
enabled to produce the JSON data with variables provided on the command line.
Making a batch of nodes
- name: {{ ''|renelick.strftime('Tree %Y-%m-%d %H:%M:%S') }}
kind: root
data:
value: 123
nodes:
{% for branch_id in range(branches|default(2)) %}
- name: {{ branch_id }}-{{ ['foo', 'bar', 'baz']|random() }}
kind: branch
data:
some-int: {{ range(-10000, 10000)|random() }}
some-float: {{ range(-1000, 1000)|random() / range(1, 100)|random() }}
obj:
attr: {{ ['abc', 'def', 'xyz', 'uvw']|random() }}
yes-or-no: {{ [true, false]|random() }}
now: "{{ ''|renelick.strftime }}"
nodes:
{% for leaf_id in range(leaves|default(2)) %}
- name: {{ leaf_id }}-{{ ['ding', 'dang', 'dong']|random() }}
kind: leaf
{% endfor %}
{% endfor %}
See node add-batch below to send the resulting JSON data.
node add-batch
Add a batch of data nodes to the database. A batch is a list of trees, and
trees are a hierarchy of nodes with root and branches attributes. Just
like node add, the JSON input can be in a file or on stdin. For example,
with two separate trees and some arbitrary data:
Sample batch JSON
[
{"root": {"name": "node-100"}},
{
"root": {
"name": "node-200",
"data": {"some-int": 1234,"some-float": 3.14, "some-str": "foobar"}
},
"branches": [
{
"root": {
"name": "node-210",
"data": {
"Life and the Universe and everything": 42,
"Question": ''
}
}
},
{
"root": {"name": "node-220"},
"branches": [
{"root": {"name": "node-221", "data": {"leaf": true}}},
{"root": {"name": "node-222", "data": {}}}
]
}
]
}
]
node get
Retrieve a single data node using a provided id.
node find
Find data nodes matching the provided fields, optionally with API
operators. This will return a JSON list of objects
which can be managed using the pagination options --offset and --limit.
node count
Count the number of nodes matching the provided fields following the same
syntax as node find.
node delete
Delete a node by moving it to garbage. Only the owner of the node or a
superuser can delete a node. It may be restored later using the node restore
command if it hasn't been permanently deleted in the underlying database.
Deleting nodes is useful during development to avoid accumulating stale data. It is also something to consider for production deployments to keep the total storage utilisation within limits. Typically, a copy of some old nodes would first be made for posterity in an archive database or just plain JSON files before deleting them via the API.
node restore
Similarly to node delete, this is to restore a node from garbage and make it
available again. The original node object id is preserved so this is useful if
a node was deleted by accident and caused some unresolved references in child
nodes. As with deleting, only the owner of the node or a superuser can restore
it from garbage.
Restoring nodes should only be considered as a last resort option rather than something to rely on in applications. There's no guarantee that the deleted node will still be available in garbage as the intention is to have them permanently deleted after a while.
node update-kinds
This is an experimental feature to enable node data schema validation. The
only schema defined at the moment is artifact-v1 which can be used for nodes
containing artifact information. This command will store the schema in the
database if it doesn't already exist. More documentation will be provided once
this feature has stabilised with server-side schema validation enabled.
Events
event send
Send an event to the given pub/sub channel with the provided
fields. This will then print the event's UUID
for future reference or the whole CloudEvent data as JSON if the --verbose
option is enabled.
event recv
Receive one event from the given pub/sub channel and print it as JSON. If the
--raw option is enabled, the whole CloudEvent is printed as-is. Otherwise,
only the parsed event data is printed.
It's important to note that this command will unsubscribe from the pub/sub
channel and exit after receiving and printing the event. As such it is not
suitable for continuously receiving all the events sent to a channel. See
service monitor for this kind of use-case.
Tasks
task run
Submit a task to be run by a given Scheduler with arbitrary fields. The task's UUID is then printed for future reference.
task get
Get a task object using its UUID and print it as JSON.
Like events, tasks are ephemeral objects. If a task has completed or was aborted, it may not be reachable any more through the API.
Finding tasks embedded in nodes
Some data nodes may keep a persistent copy of the task that created them. To find the nodes that contain a particular task using its UUID:
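A sketch, assuming the embedded copy keeps the UUID under the node's task
field:

$ renelick node find task.uuid=<task-uuid>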
task complete
Manually mark a task as complete.
This is usually done automatically by the task itself or the Runtime implementation that started it. The command-line approach is mostly only useful for debugging or prototyping a workflow.
task abort
Manually mark a task as aborted.
This usually happens automatically on the server side when reaching the task's
timeout. As with task complete, this command is primarily intended for
development purposes.
Services
service monitor
Run a pub/sub monitor to log all incoming events for the given channels.
Standard channels such as node and task get specific formatting, while other
channels fall back to a more generic default.
service orchestrator
Run an Orchestrator service.
The current implementation can only deal with specific tasks; the aim is to have a YAML configuration describing which tasks to schedule based on particular event criteria.
service scheduler
Run a Scheduler service.
As with the Orchestrator service, the default Runtime inline provides a very
specific example of running tasks directly from Python methods. The default
scheduler runs these methods in async routines.
File storage
storage upload
Upload a file to storage from a provided path. This command currently only
supports the SSH type of storage uploads to keep things simple, as included
with the standard docker compose deployment. However, the Storage class is
an abstract one which can have concrete implementations for any arbitrary
storage provider.
storage get-artifact
For a given artifact node id, download the corresponding file from storage.
This is equivalent to manually getting the node data, finding the URL inside
and then using e.g. curl to download it. The node with the given id needs to
be of the artifact-v1 kind.