
+++
title = "BaguetteOS: custom tools"
paginate_by = 5
+++

Simple ideas, simple implementations.
Keep reminding yourself while reading this section that our tools are implemented in just a few hundred lines of code (up to 1,500 lines for service and libipc).
They can also easily run on other systems: nothing here is OS-specific.

Feel free to provide feedback.

A few pointers:

  • Spec files: our declarative format
  • Package: our package manager
  • Packaging: the way to create packages for BaguetteOS
  • Service: service management
  • Build.zsh: Makefile creation
  • LibIPC: an IPC communication library
  • Other tools

Spec files: our declarative format

Before presenting our tools, here is a file format named spec that we use when relevant. It is declarative: we do not write instructions on how to do things (copy this file here, download this, etc.). Instead, we describe what something is: the URL of the project is https://example.com/xxx, for example. The spec format is only composed of variables, lists, code blocks and sections. Here is a quick example.

# This is a comment

# This is a simple variable instantiation
my-variable: value

# This is an inlined list
my-list: a, b, c

# This is a multiline list
my-list:
  - a
  - b
  - c

# Interpolation is performed with %{variable}
url: https://example.com/
software-url: %{url}/my-software-v0.2.tar.gz


# "@an-example-of-code-block" is a Code block.
@an-example-of-code-block
	# Code blocks allow inserting simple shell scripting in the configuration file.
	curl %{url} -o- | \
	sed "s/_my-software.tar.gz//" | \
	tail -1
	# The scope is defined by the indentation.

# Not the same indentation: not in the block anymore.

# "%an-example-of-section" is a Section, allowing to add metadata to a "target".
# "the-target" is the target, receiving metadata.
%an-example-of-section the-target
	# Within a section, variables are declared in the same way as outside the block.
	name: The name of the target.
	list-of-random-info-about-the-target:
	  - a
	  - b
	  - c

Variables and lists look like YAML. Code blocks allow inserting shell scripting, because sometimes we do want to give instructions. Finally, sections allow adding metadata to a target (a file or a directory).

Note for developers: this file format can be read line-by-line, which makes it very simple to parse.
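
For developers who want a feel for it, here is a minimal sketch (plain shell, not the actual implementation) of a line-by-line reader that classifies each line of a spec file:

# minimal sketch, not the real parser: classify each line of a spec file
while IFS= read -r line; do
	case "$line" in
		''|'#'*)       ;;                                           # blank line or comment
		'%'*)          echo "section:  ${line#%}" ;;                # "%section target"
		'@'*)          echo "block:    ${line#@}" ;;                # "@code-block-name"
		[[:space:]]*)  echo "  body:    $line" ;;                   # indented: list item, section body or code-block line
		*:*)           echo "variable: ${line%%:*} = ${line#*:}" ;; # "key: value"
	esac
done < example.spec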

Next, its usage in practice: packaging and service.

See the dedicated page for a more in-depth explanation of the specification file format.

Back to top


Package: our package manager

Package covers the basics: install, remove, search and provide information about a package. Package can also create a minimal rootfs, to bootstrap BaguetteOS on another system or to create test environments, for example.

Package provides slotting by default: no need for custom environments for each application.
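
For illustration, a session with package could look like this (hypothetical invocations: the exact subcommand names and flags may differ):

# hypothetical session: subcommand names and flags may differ
$ package install tmux            # fetch and install a package from a configured repository
$ package search terminal         # search the world file for matching packages
$ package info tmux               # show the meta-data of an installed package
$ package remove tmux             # remove the package
$ package rootfs /tmp/test-env    # create a minimal rootfs (bootstrap or test environment)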

The package format is a simple tar archive containing a meta.spec file describing all the package meta-data (hash, manifest, etc.) and a files.tar.xz archive with the files to install. The database of the package manager contains:

  • world, the file containing the list of available packages
  • installed, the list of installed packages
  • [package-name]/[slot]/manifest, the manifest of the installed package
  • [package-name]/[slot]/meta.spec, the meta-data of the installed package

The configuration of Package is made of:

  • a list of repositories
  • authorized package signing keys
  • packaging variables (cflags, makeflags, and so on)

Finally, Package can easily be expanded, as it is only a few hundred lines of Crystal code.

Back to top


Packaging: the way to create packages for BaguetteOS

Any OS needs a way to create packages to share software, either by sharing sources that need to be compiled or by sharing pre-compiled binaries. As BaguetteOS is designed to provide quickly usable systems, we chose to provide binaries. Packaging uses simple, declarative recipe files in the spec format we saw earlier. Packaging has a few advantages over the most widely used packaging tools:

  • declarative recipes abstract OS specifics
    The same recipe may work for many native packaging systems (on many OSes), as long as packagers provide the right target running dependencies. This only requires a back-end for the package format of the target package manager (a back-end writing .deb files, for instance).
  • packages are split automatically
    We need to separate binaries, libraries and documentation in different packages, so we can install only what's needed. Slow machines and testing systems only require the strict minimum.
  • binaries and libraries are stripped by default
    By default, a running system does not require debug symbols in its binaries.
  • recipe readability is great
    A few variable declarations are better than a dozen lines of code.
  • trivial shell script patterns become automated
    Autotools and cmake build systems are auto-detected; packagers should only provide specific parameters for each project.
  • tooling may change, recipes won't
    Everybody wants to change their build system? Besides possibly broken tools and the occasional workaround, this is not a problem for the recipe, just for packaging.
  • packages are hashed and signed by default, no extra tooling
    You need your own set of cryptographic keys, which is created at first use.
  • repositories are created automatically at first compilation, helping people maintain their own set of tools
    Change the first prefix in your packaging configuration, compile your first package and you have your repository. It's that simple.
(shell script is lava)

Packaging's build environments

Packaging uses package to create build environments and to test packages before validation. It works as follows:

  1. a /tmp/packaging/build-UUID/ directory is created
  2. sources are downloaded, extracted then compiled
    Recipes and the packaging host configuration may change build parameters: adding steps before compilation, changing configure arguments, changing the default slotting (/usr/baguette), etc.
  3. compiled applications and libraries are put in /tmp/packaging/build-UUID/root which is used to create the final package

Build environments are low-cost since we hardlink binaries into the build rootfs, an approach inspired by the proot tool on OpenBSD. Packaging only installs the minimal set of binaries required to build the package. Besides the target application, a build environment only weighs a few kilobytes.
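
To illustrate the idea (this is not packaging's actual code), hard-linking an installed slot into a build root could look like the following; it assumes the build directory and /usr live on the same filesystem, since hard links cannot cross filesystems:

# illustration only: hard links make a build rootfs almost free
# (packaging itself only links the binaries required by the recipe's build dependencies)
build_root=/tmp/packaging/build-UUID
(cd / && find usr/baguette -type f) | while IFS= read -r file; do
	mkdir -p "$build_root/$(dirname "$file")"
	ln "/$file" "$build_root/$file"   # same inode, no copy: only a few bytes of metadata
done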

Packaging configuration: common configuration for your packages

Packaging may be configured globally for your system with the file /etc/packaging.conf which contains the following:

# Configuration file for `packaging`

# where to send built packages
packages-directory: /usr/local/pkg/
# where to download sources
sources-directory:  /usr/local/src/

# the slot we want for our packages
slotting: /usr/baguette

# prefixes for `packaging` running environment and child processes
# = where to search for binaries and libraries for the build
prefixes:
	- /usr/baguette/
	- /

# list of environment variables we want to have when building
environment:
	# we may choose another compiler, provide some CFLAGS, etc.
	- CC: clang
	- CFLAGS: -Os -Wall

	# next three parameters have special meaning
	# to provide parameters to the `./configure` script when building
	- configure: --disable-nls --without-gettext

	# to provide parameters to the `make` command
	- make:

	# to provide parameters to the final `make install` command
	- make install:

# wanna build for another system? not a problem, just add the back-end (we currently have `apk` and `package`)
package-manager: package

That's it. You know all about packaging configuration. These parameters may be overridden by recipes.

Packaging recipes: what we need to create packages

A recipe is the way to reproduce something; here, we want to create a package, so the recipe should provide all the data necessary to reproduce that package. This means at least a name for the software, a version (both appear in the package name) and sources (the software code). Let's take an example.

# GNU Hello program
name: hello    # software name
version: 2.10  # software version
release: 2     # recipe release: incremented when the recipe changes for the current version of the software

# the description will appear in the package information we can retrieve with `package`
description: "This is the GNU Hello program."

# sources may be multiple: you may want to add arbitrary files along the tarball (patches for example)
sources: https://ftp.gnu.org/gnu/hello/hello-%{version}.tar.gz

# we provide running dependencies: the program needs these to `run`
dependencies:
  - gettext

# we provide build dependencies: these programs are needed to compile our recipe, not to run the application
build-dependencies:
  - make

# if we want to add or override compilation options
options:
  - configure: --disable-nls

# feels like déjà vu, right?
# "watch" code block helps to check if the recipe covers the last software version
@watch
	curl 'https://ftp.gnu.org/gnu/hello/' -o- 2>/dev/null |
	sed -n "/hello-.*\.tar\.gz/{s/\.tar\.gz.*//;s/.*hello-//;p}" |
	tail -1

This was a real example, and not the simplest one. Most of our recipes currently look like this:

name: dhcpcd
version: 8.0.3
sources: https://roy.marples.name/downloads/dhcpcd/dhcpcd-%{version}.tar.xz
description: "dhcp server"

That's it. Yes, we can add a few meta-data, but this is a working recipe. Configuration, compilation and packaging are done without needing anything else. The only required parameters are name, version and sources.

Sometimes, developers are assholes and force you to fix their build system. When manual operations really are required, you can use the @configure, @build and @install code blocks. Let's see an example with a special snowflake… like perl, which has a non-standard build system.

# in our example, we want to configure the build
@configure
	# we set some script execution parameters: exit at any error, print everything you do
	set -e -x

	# we currently are in `/tmp/packaging/build-UUID/`
	# now we enter the directory created by the tarball extraction, where the build occurs
	cd perl-%{version} # we can interpolate variables with this: %{}

	# Perl needs a few environment variables for configuration
	BUILD_ZLIB=0
	BUILD_BZIP2=0
	BZIP2_LIB=/usr/lib
	BZIP2_INCLUDE=/usr/include
	export BUILD_ZLIB BUILD_BZIP2 BZIP2_LIB BZIP2_INCLUDE

	# now we can launch the `configure` script…
	# … but wait kids, that's not `configure`, that is `Configure` (see the capitalized C?)
	# this is a totally different and very special script, but you should love it anyway
	./Configure -des \
		-Dcccdlflags='-fPIC' \
		-Dccdlflags='-rdynamic' \
		# ... some long and uninteresting list of very specific parameters because we are Perl, we are historic and stuff

Now you know how to deal with @configure, @build and @install: these are code blocks allowing you to fix this kind of problem. Non-standard build operations happen from time to time, and code blocks help you overcome them. If a lot of packages need the same workarounds, we might add detection and integration into packaging, so that only specifics are kept in recipes.

If you want to investigate a bit more, you can check our recipe repository. Feel free to improve these recipes with meta-data, @watch code blocks… contributions are welcome.

Future of packaging: let's be even more declarative

As we saw, packaging allows maintainers to create very simple and readable recipes. Sometimes we have to deal with poorly designed build systems, but we can hack around them. In the future, manual operations should be reduced even more by adding:

  • a few other parameters to the environment
    This implies checking for patterns in the recipes and providing workarounds.
  • new code blocks before the configuration, build and install steps
    This would allow performing a few hacks in the directory (a quick sed in a file for instance) while still keeping automatic operations.

Another possible addition to packaging could be to treat cross-OS recipes as first-class citizens. It was part of the design process of spec files: sections can be used to discriminate information based on the target OS. An example:

%OS Ubuntu-20.04
	dependencies: libxxx
%OS Debian-12
	dependencies: lib-with-specific-naming-standard-for-reasons

This way, bootstrapping an application and providing it to any system with its own tools could be easy. It's actually what we did during Baguette's bootstrap. Providing universal recipes could even become a game for patient system administrators.

Back to top


Service: service management, not just a kill or start/stop/status wrapper

Service management often comes with:

  • default configuration files: users have to learn how to configure them and do it manually
  • a default user and group, so two instances may have security issues
  • a single possible instance, otherwise the configuration has to be heavily changed
  • root-only management, simple users rarely run their own services (except with systemd, kudos for once)
  • no domain management

These shortcomings imply manual configuration, scripting to manage databases and users, and specific tooling for each database and service: this is heavy machinery. To overcome the drawbacks of such simplistic tools, sysadmins developed all kinds of monstrous architectures.

  • LXC: it's basically a chroot with network and software limits
    LXC is kinda reasonable, and may be useful in some cases, but it provides no simple way of configuring our services.
  • Qemu + KVM, Xen: let's add software mimicking hardware's complexity to the mix, telling everyone it's for security and simplicity
    These programs make administration simple for sysadmins: no need to thoroughly configure users, groups, etc. Everyone is root and handles their administration as they want. Qemu and the like also help big companies build a large computing capacity which they can rent out when they don't need it. At no point do Qemu or Xen help getting your services up and running, and they are not made for security. Yes, running broken programs may be safer within Qemu than on a plain, non security-oriented OS. But this is still way less efficient than fixing the application. Running applications as simple users, compiling them with sane default options (RETGUARD for example) and providing a few syscalls (like pledge and unveil) to catch errors and most security holes is simple nowadays; let's use that.
    But, I can't do that in my company 'cause reasons.
    That's okay if you don't want to do that in your company; we are here for end users and advanced users, mostly. We don't aim at fixing everybody's problems.
  • Docker: I don't know how to make simple applications nor packages, so I give you my whole dev environment
    Note: we have to admit, packaging on most OSes is painful for absolutely no good reason.
  • Chef and Puppet: the 500 MB of Ruby code running on our virtual machines just to check for configuration updates is okay 'cause memory is cheap, right?
    We talk about the importance of security from time to time, but running software designed by people who say it's okay not to free memory is far from wise.
  • Ansible: templating your applications… from another machine
    Like Chef and Puppet, Ansible provides templating for applications; this time, configuration propagation is way simpler since it uses the well-known, trusted and loved ssh. Still, templating is done for remote machines, as it is intended for server deployments: this is a sysadmin tool. The introduction page already talks about cloud provisioning and intra-service orchestration on the first line, telling you that you really need to set up SSH keys, etc. As with many tools in IT: this is not for simple users.

Simple users:

  1. should only have to provide absolutely necessary information for their services
  2. should be able to run as many services as they want
  3. shouldn't have to learn configuration syntax for their services
  4. shouldn't be afraid of updates
  5. ... and should have BACKUPS! Where are they? We should have had that by default on our systems over 20 years ago.

And advanced users should have an uncomplicated CLI tool to do that.

Let's take an example with service (better than a thousand words)

# We want a wordpress service, proxied by an nginx and using postgresql as DBMS
# THIS IS THE "VERBOSE" VERSION

# 1. we add an nginx
$ service add nginx
# 2. we add the database
$ service add postgresql
# 3. we add the wordpress
#    by default, it uses available http proxy and database
$ service add wordpress domain=example.com http=nginx database=postgresql
# 4. we start the wordpress
$ service start wordpress

A bit of explanation:

  1. first, we add nginx to the list of services we want on our system

  2. same thing with postgresql

  3. then we add wordpress and we pass parameters to the service configuration: domain, http and database
    domain for the domain name, http for the HTTP proxy and database for the database back-end. Up to this point, nothing has even started.

  4. Finally, we start the service.
    Since service knows the dependency graph, it starts the other services before wordpress (which actually doesn't have any binary to run per se). The configuration of a service is made when service starts it. In this case:

    1. nginx is started as a proxy for example.com
    2. postgresql is started, its internal directories and files are created, then a user, a database and a password for wordpress are created
    3. a directory is created in /srv/root/wordpress/ and wordpress files are copied into it, then its configuration is generated

    Stopping a service also stops its dependencies, unless specified otherwise. Of course, a service is not stopped if it is required elsewhere.

Wanna see the less verbose version?

$ service add wordpress domain=example.com
$ service start wordpress

And that's it.

  1. Services have tokens.
  2. Tokens are used by default.
  3. BaguetteOS provides default services for each token.
  4. If a service is added and its dependencies aren't satisfied, we add other services.
  5. (Bonus) If a service isn't installed, we ask nicely if the user wants to install it.
    (This is in discussion.)

Here are a few functionalities service brings.

  1. uncomplicated service configuration with shared information
    Services can share:

    • passwords (which are auto-generated)
    • user names (which follow a naming convention)
    • port numbers (auto-attributed unless specified)
    • ... etc.
  2. templates: configuration files are generated from templates and a few parameters
    When we want a piece of software, for instance a blog, we want to provide the minimum information it requires to work, and that's it. When service starts a service, it verifies whether its configuration file is installed and up-to-date, and creates it otherwise. Users shouldn't need to manually change the configuration. Syntax may change at any time without breaking a single service, since the configuration will smoothly be regenerated with the useful information at start-up.

  3. environments
    Each service can be installed in a specific environment, such as a custom rootfs, a virtual machine, etc.

  4. automatic user and group creation
    Each service needs to run as a unique user, dedicated to it and different from the other users on the system, for security reasons. Let's make it the default behavior.

  5. tokens: service a needs b, b needs c; when a starts, b and c are configured, then started
    Service knows the relations between services, and uses them to configure services through the templates. Everything is smoother now.

  6. automatic backup solution
    Since we know each running database, service configuration and data directory, we can back up everything. We just need a backup server to be configured. Of course, we can add a lot of parameters to get a fine-grained backup solution.

# not currently done, but will look like this
$ backup add ssh:user@example.com:/srv/backup
  7. unified way to configure the system: the best user experience possible
    It alleviates the need for manual configuration. The CLI tool is the same for every provided service.
    Behind the scenes, it's a simple token system with configuration templating!
    No heavy machinery here, and we'll keep it that way.

  8. ... and understandable tooling output, for god's sake!

$ sudo service show my-awesome-gitea-instance
Name:            my-awesome-gitea-instance
Type:            gitea
Environment:     root (prefix)
Consumes:
  - database     root/postgresql
  - http         root/nginx
Ports:
  - http:        49155

Service: a service example

A service is defined by its description in a .spec file and its template(s). Let's use gitea as an example, the git web server handling our repositories.

# `gitea.spec`

# The command which is run to start the service.
command: gitea -C . -w . -c gitea.cfg

# The tokens needed by the service. Here http is optional.
consumes: database, http?

# Does the service require a domain name to be configured?
# If so, service will require a "domain" token, such as `service add $service domain=example.com`
# This name may be used in the service configuration, or its (forward or backward) dependencies.
requires-domain: true

# The port(s) the service requires.
# By default, if a service requires a port named "http", its value will be 80.
ports: http

# `%configuration` indicates the name of the configuration template.
# For example, if the template file is `/usr/baguette/service/templates/gitea.cfg.j2`:
%configuration gitea.cfg

We currently use a bit more configuration for the database, but we will get there at some point.

Now, a quick look at the gitea configuration template (just a sample, we'll skip most of it, don't worry).
Templating is done with Jinja2; see more about Jinja2 templating.

First, let's say we created a gitea service this way:

service add gitea domain=example.com database=postgresql

Now, we want to see the template used to configure gitea.

[database]
# `service` passes data to the templates.
# `providers` is the list of services behind the tokens used.
# For example, behind "providers.database" there is a `postgresql` service.

# Gitea needs to establish a connection to its database.
# The database ports can be accessed with "database.ports".
# The port used to connect to the database is named "main" by convention.
# See the definition of the postgresql service for details.
HOST     = 127.0.0.1:{{ providers.database.ports.main }}

# Here, the template requires the database name.
# By convention, databases are named: $environment_$service_db.
# In our example, `service` creates the database "root_gitea_db", as the default environment is "root".
# `service.id` is '$environment/$service', which uniquely identifies the service.
# In our example, `service.id` is "root/gitea".
# The database name will be obtained by replacing "/" by "_" and adding "_db" to the service.id variable.
# This is performed this way:
NAME     = {{ service.id | replace("/", "_") }}_db

# By convention the user name is identical to the database name without the "_db" suffix.
USER     = {{ service.id | replace("/", "_") }}

# The `random_password` function creates a unique password the first time it is called.
# This function takes a service id in parameter.
# After that, it provides the same password each time it is called with the same service id.
# `random_password` enables sharing a password between two service configurations.
PASSWD   = {{ random_password( service.id ) }}

[repository]
# service.root indicates the service working directory: /srv/<environment>/<service>/
ROOT = {{ service.root }}/repositories

[server]
# service.domain = provided domain token value
# service.ports.http = provided ports.http token value
ROOT_URL         = http://{{ service.domain }}:{{ service.ports.http }}/

Current implementation of service

It currently works, and we just need to add more services! Every new corner case will be thoroughly investigated to provide the smoothest possible experience with BaguetteOS. We currently use it for the nginx serving this very website.

Currently, there are no fancy environments, just plain directories.

Future of service

First, we want to work on databases, to export data and get a simple backup service.

Then, we could let users manage their own services, but this is of little interest in practice since services are running under different users already.

The most useful contributions right now would be to provide new service configurations.

Back to top


Build.zsh: Makefile creation, for mere mortals

A Makefile is very useful to share your project, whatever it is: an application or a library, whatever the language used, etc.

Build.zsh creates a Makefile from a simple declarative configuration file. A Makefile should:

  • compile and link the application
  • handle dependencies and rebuild only the updated parts of the application for incremental builds
  • allow users to rebuild any part of the project independently
  • install the application
    The default installation root directory is /usr/local but it can be changed easily (with an environment variable or through configuration).
  • create a tarball of the project
    This tarball includes the application, its man-pages, the Makefile to build it, etc.
  • have a make help (everybody needs help)
  • (BONUS) have colors. I like colors.

How to generate a Makefile using Build.zsh

$ ls
project.zsh       # we need to create this file, we'll cover that in a moment
$ build.zsh -c    # -c is for colors. I like colors. Really.
$ ls
project.zsh
Makefile          # tadaaaaa you made a Makefile

You can invoke your newly created Makefile with make help to know what it can do for you. For example, it can create a tarball with make dist.
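
A typical session could look like this (assuming, as is conventional, that the default target builds the project):

$ make           # build the project
$ make help      # list what the generated Makefile can do
$ make dist      # create a tarball of the project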

How to write a project.zsh for build.zsh

# File `project.zsh`

package=my-application  # Name of your package.
version=2.18.3          # Version of the package (will follow the application version, probably).

# targets = all the applications to compile in the project.
targets=(my-application)

# The language of these applications is then specified.
# Here, my-application is coded in Crystal.
# `build.zsh` comes with a number of back-ends: https://git.baguette.netlib.re/Baguette/build.zsh/src/branch/master/build
type[my-application]=crystal

# The sources of the application.
sources[my-application]=src/main.cr

# The application depends on a certain number of files.
# `make` will re-compile "my-application" if one of these files is modified.
depends[my-application]="$(find src -type f | grep -v main.cr)"

# Finally, the list of the files to be included in the tarball (made through `make dist`).
# Here, the tarball will include:
#   the application binary,
#   man-pages in man/ directory,
#   this project.zsh and the generated Makefile.
dist=(my-application man/*[1-9] project.zsh Makefile)

Build.zsh: a real-life example

package=build_zsh # Name of the package.
version=0.2.1     # Version of the package.

targets=(build.zsh)     # The things to build or install.
type[build.zsh]=script  # How they're built.

# Using a for loop to add more targets.
# In this example, we're registering scripts for installation.
for i in build/*.zsh; do
	targets+=($i)
	type[$i]=script

	# Installation in a non-default directory.
	install[$i]='$(SHAREDIR)/build.zsh'

	# Targets marked as “auto” won't appear in `make help`.
	auto[$i]=true
done

# Files to add to tarballs through `make dist`.
dist=(build.zsh.in build/*.zsh project.zsh Makefile)

Another real-life example (I like examples)

This time, we want to build a Makefile for a C library.

package=libipc    # Name of the package.
version=0.5.1     # Version of the package.

# Our targets are the library and its documentation.
targets=(libipc man/libipc.7)

# The libipc target is a library ("for the C language" is implied).
# The `library` type automatically adds two targets:
#   `target`.so  of type `sharedlib`
#   `target`.a   of type `staticlib`
type[libipc]=library

# Sources are added by default to the tarball.
sources[libipc]=$(ls src/*.c)

depends[libipc]=$(ls src/*.h)

# Add extra compilation flags.
variables+=(CFLAGS "-Wall -Wextra -g")

# Let's add some CFLAGS, with another syntax.
cflags[libipc]="-std=c11"

# The man/libipc.7 target is a man-page generated with `scdoc`.
# `scdocman` is one of the many back-ends of build.zsh: https://git.baguette.netlib.re/Baguette/build.zsh
type[man/libipc.7]=scdocman

# Files to add in the tarball, generated with `make dist`.
dist=(libipc.so libipc.a)     # The library in both shared and static versions.
dist+=(man/*.scd man/*.[1-9]) # Manual pages (and sources).
dist+=(Makefile project.zsh)  # The generated Makefile and this project.zsh.

This example shows how to work with C libraries, but it wouldn't have changed much for an application: only the type of the target would change. This project.zsh comes from the LibIPC repository (a library we'll see soon).

What about now?

Now you do not have any good reason to avoid Makefiles for your projects!

But, I don't like Makefiles!
You like the functionalities we presented here, but you want an alternative to Makefiles? We may add some other back-ends. Stay tuned!

Back to top


LibIPC: an IPC communication library (nothing new, yet it still feels fresh)

We use this communication library between our services.

  1. Applications should talk to each other
  2. We need services, not libraries
    Therefore, languages are irrelevant: services can be used from any language.

LibIPC is currently used for the administration dashboard, the web interface for the services, a kanban board and several other tools we use for collaboration. It provides a way to communicate between clients and services using simple Unix sockets behind the scenes.

C library with Crystal bindings (other languages coming soon)

require "ipc.cr"

server = IPC::Service.new "MyService"
server.loop do |message|
    # ...
end

That's easy to write even in plain C.

A full LibIPC explanation goes beyond the scope of this page… and may even deserve a whole website on its own, but the tool is awesome and the performance is crazy (we have to tell the world!). Just go and check!

Remote communications are transparent:

  • clients and services do not need to handle remote communication themselves
  • any client can join remote services via any communication protocol
  • any service is implicitly accessible from anywhere, with any protocol

Back to top


Other tools

tap-aggregator: quality assurance & test results aggregation

Back to top


Webhooksd: verify recipes.

Webhooksd provides automatic verification of the recipes when a new application or library version is published. Paired with a build system, new recipes received in the repository produce packages for a couple of architectures (x86_64 and ARM; others will follow).

Back to top


Still in discussion

For simple users, we want to provide a unified web interface to manage the system and online services. We currently use Crystal for the service back-ends of a variety of projects; we are satisfied on a technical level and we are productive, so it is highly probable we will continue using it. The front-end is still in discussion: we currently use LiveScript, which is way more usable than plain JS, but it is painful to debug.

So, we need a language for both the administration dashboard and online services; here are some options we find interesting for the job:

  • Purescript
    • Haskell-like syntax, with a smaller set of notions
    • strongly typed, with type inference
    • good documentation
    • useful compilation errors
    • no runtime errors
  • Elm
    • like Purescript, but with way less documentation (though reading the code is sometimes enough here)
    • less generic code (functions such as fold and map have hardcoded types), which feels a bit hacky
    • still very young
  • WASM
    • seems to be a very young tech, with no really good language or documentation yet
    • Zig supports WASM as a Tier 1 target, we should investigate

And we should implement a generic framework; QML was the way all along (without all the historic tooling and without C++, it would be awesome!).

Next

The next step is to use BaguetteOS, or even become a contributor.