.so macros.roff
.de TREE1
.QP
.ps -3
.vs -3
.KS
.ft CW
.b1
.nf
..
.de TREE2
.ft
.fi
.b2
.ps
.vs
.KE
.QE
..
.de CLASS
.I \\$*
..
.de FUNCTION_CALL
.I "\\$*"
..
.
.de COMMAND
.I "\\$*"
..
.de DIRECTORY
.ps -2
.I "\\$*"
.ps +2
..
.de PRIMARY_KEY
.I \\$1 \\$2 \\$3
..
.de FOREIGN_KEY
.I \\$1 \\$2 \\$3
..
.
.
. \" The document starts here.
.
.TITLE Document Oriented DataBase (DODB)
.AUTHOR Philippe PITTOLI
.ABSTRACT1
DODB is a database-as-library, enabling a very simple way to store applications' data: storing serialized
.I documents
(basically any data type) in plain files.
To speed up searches, attributes of these documents can be used as indexes.
DODB can provide a file-system representation of those indexes through a few symbolic links
.I symlinks ) (
on the disk.
This enables administrators to search for data outside the application with the most basic tools, like
.I ls .
This document briefly presents DODB and its main differences with other database engines.
The limits of such an approach are discussed.
An experiment is described and analyzed to understand the performance that can be expected.
.ABSTRACT2
.SINGLE_COLUMN
.SECTION Introduction to DODB
A database is about managing data: enabling (preferably fast) queries to retrieve, modify, add and delete pieces of information.
Anything else is
.UL accessory .
Universities all around the world teach about Structured Query Language (SQL) and relational databases.
.
.UL "Relational databases"
are built around the idea to put data into
.I tables ,
with typed columns so the database can optimize operations and storage.
A database is a list of tables with relations between them.
For example, let's imagine a database of a movie theater.
The database will have a
.I table
for the list of movies they have
.PRIMARY_KEY idmovie , (
title, duration, synopsis),
a table for the scheduling
.PRIMARY_KEY idschedule , (
.FOREIGN_KEY idmovie ,
.FOREIGN_KEY idroom ,
time slot),
a table for the rooms
.PRIMARY_KEY idroom , (
name), etc.
Tables have relations, for example the table "scheduling" has a column
.I idmovie
which points to entries in the "movie" table.
.UL "The SQL language"
enables arbitrary operations on databases: add, search, modify and delete entries.
Furthermore, SQL also covers administrative operations on the databases themselves: creating databases and tables, managing users with fine-grained authorizations, etc.
SQL is used between the application and the database, to perform operations and to retrieve results.
SQL is also used
.UL outside
the application, by admins for managing databases and potentially by some
.I non-developer
users to retrieve some data without a dedicated interface\*[*].
.FOOTNOTE1
One of the first objectives of SQL was to enable a class of
.I non-developer
users to talk directly to the database so they can access the data without bothering the developers.
This has value for many companies and organizations.
.FOOTNOTE2
Many tools were used or even developed over the years specifically to alleviate the inherent complexity and limitations of SQL.
For example, designing databases becomes difficult when the list of tables grows;
Unified Modeling Language (UML) is then used to provide a graphical overview of the relations between tables.
SQL databases may be fast to retrieve data despite complicated operations, but when multiple sequential operations are required they become slow because of all the back-and-forths with the application;
thus, SQL databases can be scripted to automate operations and provide a massive speed up
.I "stored procedures" , (
see
.I "PL/SQL" ).
Writing SQL requests requires a lot of boilerplate since there is no integration in the programming languages, leading to multiple function calls for any operation on the database;
thus, object-relational mapping (ORM) libraries were created to reduce the massive code duplication.
And so on.
For many reasons, SQL is not a silver bullet to
.I solve
the database problem.
Given the difficulties mentioned above, and since the original objectives of SQL are not universal\*[*], other database designs were created\*[*].
.FOOTNOTE1
To say the least!
Not everyone needs to let users access the database without going through the application.
For instance, writing a \f[I]blog\f[] for a small event or to share small stories about your life doesn't require manual operations on the database, fortunately.
.FOOTNOTE2
.FOOTNOTE1
A lot of designs won't be mentioned here.
The actual history of databases is often quite unclear since the categories of databases are sometimes vague and underspecified.
As mentioned, SQL is not a silver bullet, and a lot of developers shifted towards other solutions; that's the important part.
.FOOTNOTE2
The NoSQL movement started because the stated goals of many actors from the early Web boom were different from SQL.
The need for very fast operations far exceeded what was practical at the time with SQL.
This led to the use of more basic methods to manage data such as
.I "key-value stores" ,
which simply associate a value with an
.I index
for fast retrieval.
In this case, there is no need for the database to have
.I tables ,
data may be untyped, the entries may even have different attributes.
Since homogeneity is not necessary anymore, databases have fewer (or different) constraints.
Document-oriented databases are a sub-class of key-value stores, where metadata can be extracted from the entries for further optimizations.
And that's exactly what is being done in Document Oriented DataBase (DODB).
.UL "Contrary to SQL" ,
DODB has a very narrow scope: to provide a library to store, retrieve, modify and delete data.
In this way, DODB transforms any application into its own database manager.
DODB doesn't provide an interactive shell, there is no request language to perform arbitrary operations on the database, no statistical optimizations of the requests based on query frequencies, etc.
Instead, DODB reduces the complexity of the infrastructure, stores data in plain files and enables simple manual scripting with widespread unix tools.
Simplicity is key.
.UL "Contrary to other NoSQL databases" ,
DODB doesn't provide an application but a library, nothing else.
The idea is to help developers to store their data themselves, not depending on
.I yet-another-all-in-one
massive tool.
The library writes (and removes) data on a storage device, has a few retrieval and update mechanisms and that's it\*[*].
.FOOTNOTE1
The lack of features
.I is
the feature.
Even with that motto, the tool is still expected to be convenient for most applications.
.FOOTNOTE2
This document will provide an extensive documentation on how DODB works and how to use it.
The presented code is in Crystal, as is the DODB library for now, but keep in mind that this document is all about the method more than the actual implementation; anyone could implement the exact same library in almost any other language.
Limitations are also clearly stated in a dedicated section.
A few experiments are described to provide an overview of the performance you can expect from this approach.
Finally, a conclusion is drawn based on a real-world usage of this library.
.
.SECTION How DODB works and basic usage
DODB is a hash table.
The key of the hash is an auto-incremented number and the value is the stored data.
The following section will explain how to use DODB for basic cases including the few added mechanisms to speed up searches.
Also, the file-system representation of the data will be presented since it enables easy off-application searches.
.
.
.SS Before starting: the example database
First things first, the following code is the structure used in the rest of the document to present the different aspects of DODB.
This is a simple object
.I Car ,
with a name, a color and a list of associated keywords (fast, elegant, etc.).
.SOURCE Ruby ps=9 vs=10
class Car
  include JSON::Serializable # stored values are serialized (JSON for now)
  property name     : String
  property color    : String
  property keywords : Array(String)

  def initialize(@name, @color, @keywords)
  end
end
.SOURCE
.
.
.SS DODB basic usage
Let's create a DODB database for our cars.
.SOURCE Ruby ps=9 vs=10
# Database creation
database = DODB::Storage::Uncached(Car).new "path/to/db-cars"
# Adding an element to the db
database << Car.new("Corvet", "red", ["elegant", "fast"])
# Reaching all objects in the database
database.each do |car|
  pp! car
end
.SOURCE
When a value is added, it is serialized\*[*] and written in a dedicated file.
.FOOTNOTE1
Serialization is currently in JSON.
.[
JSON
.]
CBOR
.[
CBOR
.]
is a work-in-progress.
Nothing binds DODB to a particular format.
.FOOTNOTE2
The key of the hash is a number, auto-incremented, used as the name of the stored file.
The following example shows the content of the file system after adding the first car.
.TREE1
$ tree db-cars/
db-cars/
|-- data
| `-- 0000000000 <- the first car in the database
`-- last-index
.TREE2
In this example, the directory
.I db-cars/data
contains the serialized value, with a formatted number as its file name.
The file "0000000000" contains the following:
.QP
.SOURCE JSON ps=9 vs=10
{
  "name": "Corvet",
  "color": "red",
  "keywords": [
    "elegant",
    "fast"
  ]
}
.SOURCE
The car is serialized as expected in the file
.I 0000000000 .
.QE
.
.
Next step: to retrieve, modify or delete a value, its key is required.
.
.QP
.SOURCE Ruby ps=9 vs=10
# Get a value based on its key.
database[key]
# Update a value based on its key.
database[key] = new_value
# Delete a value based on its key.
database.delete key
.SOURCE
.QE
.
The function
.FUNCTION_CALL each_with_key
lists the entries with their keys.
.
.QP
.SOURCE Ruby ps=9 vs=10
database.each_with_key do |value, key|
  puts "#{key}: #{value}"
end
.SOURCE
.QE
Of course, browsing the entire database to find a value (or its key) is a waste of resources and isn't practical for any non-trivial database.
That is when indexes come into play.
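.QP
To illustrate what indexes avoid, here is a naive search written with the API presented above (a hedged sketch, for illustration only): every entry is read and deserialized until the right one is found.
.SOURCE Ruby ps=9 vs=10
# Full scan: find the car named "Corvet" (and its key) by browsing the whole database.
corvet     = nil
corvet_key = nil
database.each_with_key do |car, key|
  if car.name == "Corvet"
    corvet, corvet_key = car, key
    break
  end
end
.SOURCE
.QE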
.
.
.SS Triggers
A simple way to quickly retrieve a piece of data is to create
.I indexes
based on its attributes.
When a value is added to the database, or when it is modified, a
.I trigger
can be called to index it.
There are currently three main triggers in
.CLASS DODB
to index values: basic indexes, partitions and tags.
.
.SSS Basic indexes (1 to 1 relations)
Basic indexes
.CLASS DODB::Trigger::Index ) (
represent one-to-one relations, such as an index in SQL.
In the Car database, each car has a dedicated (unique) name.
This
.I name
attribute can be used to speed-up searches.
.QP
.SOURCE Ruby ps=9 vs=10
# Create an index based on the "name" attribute of the cars.
cars_by_name = database.new_index "name" do |car|
  car.name
end
# Two other ways to say the same thing, thanks to the Crystal syntax:
cars_by_name = database.new_index("name") { |car| car.name }
cars_by_name = database.new_index "name", &.name
.SOURCE
Once the index has been created, every added or modified entry in the database will be indexed.
Adding a trigger provides an
.I object
used to manipulate the database based on the related attribute.
Let's call it an
.I "index object" .
In the code above, the
.I "index object"
is named
.I "cars_by_name" .
.QE
The
.I "index object"
has several useful functions.
.QP
.SOURCE Ruby ps=9 vs=10
# Retrieve the car named "Corvet".
corvet = cars_by_name.get? "Corvet"
# Modify the car named "Corvet".
new_car = Car.new "Corvet", "green", ["eco-friendly"]
cars_by_name.update "Corvet", new_car
# In case the index hasn't changed (the name attribute in this example),
# the update can be even simpler.
cars_by_name.update new_car
# Delete the car named "Corvet".
cars_by_name.delete "Corvet"
.SOURCE
A car can now be searched, modified or deleted based on its name.
.QE
.
.
On the file-system, indexes are represented as symbolic links.
.TREE1
storage
+-- data
|  `-- 0000000000 <- the car named "Corvet"
`-- indexes
   `-- by_name
      `-- Corvet -> ../../data/0000000000
.TREE2
.QP
As shown, the file "Corvet" is a symbolic link to a data file.
The name of the symlink file has been extracted from the value itself, making it possible to list all the cars and their names with a simple
.COMMAND ls
in the
.DIRECTORY storage/indexes/by_name/
directory.
.QE
.
The basic indexes as shown in this section already give a taste of what is possible to do with DODB.
The following triggers will cover some other usual cases.
.
.
.SSS Partitions (1 to n relations)
An attribute can have a value that is shared by other entries in the database, such as the
.I color
attribute of our cars.
.QP
.SOURCE Ruby ps=9 vs=10
# Create a partition based on the "color" attribute of the cars.
cars_by_color = database.new_partition "color" do |car|
  car.color
end
# Shortcut:
cars_by_color = database.new_partition "color", &.color
.SOURCE
.QE
As with basic indexes, once the partition is requested from the database, every new or modified entry will be indexed.
.KS
Let's imagine having 3 cars, one is blue and the other two are red.
.TREE1
$ tree db-cars/
db-cars
+-- data
|  +-- 0000000000 <- this car is blue
|  +-- 0000000001 <- this car is red
|  `-- 0000000002 <- this car is red, too
| ...
`-- partitions
   `-- by_color
      +-- blue
      |  `-- 0000000000 -> 0000000000
      `-- red
         +-- 0000000001 -> 0000000001
         `-- 0000000002 -> 0000000002
.TREE2
.QP
Listing all the blue cars is as simple as running
.COMMAND ls
in the
.DIRECTORY db-cars/partitions/by_color/blue
directory!
.QE
.KE
.
.
.
.SSS Tags (n to n relations)
Tags are basically partitions but the indexed attribute can have multiple values.
.
.QP
.SOURCE Ruby ps=9 vs=10
# Create a tag based on the "keywords" attribute of the cars.
cars_by_keywords = database.new_tags "keywords" do |car|
  car.keywords
end
# Shortcut:
cars_by_keywords = database.new_tags "keywords", &.keywords
.SOURCE
As with other indexes, once the tag is requested from the database, every new or modified entry will be indexed.
.QE
.
.
.KS
Let's imagine having two cars with different associated keywords.
.TREE1
$ tree db-cars/
db-cars
+-- data
|  +-- 0000000000 <- this car is fast and cheap
|  `-- 0000000001 <- this car is fast and elegant
`-- tags
   `-- by_keywords
      +-- cheap
      |  `-- 0000000000 -> 0000000000
      +-- elegant
      |  `-- 0000000001 -> 0000000001
      `-- fast
         +-- 0000000000 -> 0000000000
         `-- 0000000001 -> 0000000001
.TREE2
.QP
Listing all the fast cars is as simple as running
.COMMAND ls
in the
.DIRECTORY db-cars/tags/by_keywords/fast
directory!
.QE
.KE
.
.
.
.SSS Side note about triggers
DODB presents a few possible triggers (basic indexes, partitions and tags) which respond to an obvious need for fast searches.
However, their implementation via the creation of symlinks is the result of a certain vision of how a database should behave in order to provide a practical way for users to sort the entries.
The implementation can be completely changed.
Also, other kinds of triggers could
.B easily
be implemented in addition to those presented.
The new triggers may have completely different objectives than providing a file-system representation of the data.
The following sections will precisely cover this aspect.
.
.
.SECTION DODB, slow? Nope. Let's talk about caches
The file-system representation (of data and indexes) is convenient for the administrator, but input-output operations on a file-system are slow.
Storing the data on a storage device is required to protect it from crashes and application restarts.
But data can be kept in memory for faster processing of requests.
The DODB library has an API close to that of a hash table.
Having a data cache is as simple as keeping a hash table in memory alongside the file-system storage; retrieval then becomes incredibly fast\*[*].
.FOOTNOTE1
Several hundred times faster, see the experiment section.
.FOOTNOTE2
The same goes for indexes, which can easily be cached thanks to simple hash tables.
.B "Cached database" .
A cached database has the same API as the other DODB databases and keeps a copy of the entire database in memory for fast retrieval.
.QP
.SOURCE Ruby ps=9 vs=10
# Create a cached database
database = DODB::Storage::Cached(Car).new "path/to/db-cars"
.SOURCE
All operations of the
.CLASS Storage::Uncached
class are available for
.CLASS Storage::Cached .
.QE
.
.B "Cached indexes" .
Since indexes do not require nearly as much memory as caching the entire database, they are cached by default.
.
.
.
.SECTION Common database
Storing the entire data-set in memory is an effective way to make requests fast, as demonstrated by
the
.I "cached database"
presented in the previous section.
Not all data-sets are compatible with this approach, for obvious reasons.
Thus, a tradeoff could be found to enable fast retrieval of data without requiring much memory.
Caching only a part of the data-set could already enable a massive speed-up even in memory-constrained environments.
The most effective strategy may differ from one application to another\*[*].
.FOOTNOTE1
Providing a generic algorithm that should work for all possible constraints is a hazardous endeavor.
.FOOTNOTE2
However, caching only the most recently requested values is a simple policy which may be efficient in many cases.
This strategy is implemented in the
.CLASS DODB::Storage::Common
database and this section will explain how it works.
The common database implements a simple strategy to keep only relevant values in memory:
caching
.I "recently used"
values.
Any value that is requested or added to the database is considered
.I recent .
.B "How this works" .
Each time a value is added to the database, its key is put at the front of a list.
In this list,
.B "values are unique" .
Adding a value whose key is already present in the list is considered as
.I "using the value" ,
so its key is moved back to the front of the list.
In case the number of entries exceeds what is allowed,
the least recently used value (the last list entry) is removed,
along with its related data from the cache.
.B "Implementation details" .
The implementation is time-efficient:
the duration of adding a value is almost constant and doesn't change much with the number of entries.
This time efficiency comes at the cost of memory.
All the entries are added to a
.B "double-linked list"
(to keep track of the order of the added keys)
.UL and
to a
.B hash
to perform efficient searches of the keys in the list.
Thus, all the nodes are added twice, once in the list, once in the hash.
This way, adding, removing and searching for an entry in the list is fast,
no matter the size of the list.
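.QP
To make the policy concrete, here is a minimal sketch of a
.I "least recently used"
key list; it is
.B not
the actual DODB implementation (which pairs the doubly-linked list with a hash for constant-time operations) and simply relies on the insertion order of Crystal hashes to stay short.
.SOURCE Ruby ps=9 vs=10
class RecentKeys(K)
  def initialize(@capacity : Int32)
    @keys = Hash(K, Nil).new
  end

  # Mark a key as recently used; returns the evicted key, if any.
  def use(key : K) : K?
    @keys.delete key    # if already present, forget its old position
    @keys[key] = nil    # (re)insert the key at the most recent position
    if @keys.size > @capacity
      oldest = @keys.first_key
      @keys.delete oldest
      return oldest
    end
    nil
  end
end

cache = RecentKeys(Int32).new 2
cache.use 1 # => nil
cache.use 2 # => nil
cache.use 1 # => nil (key 1 becomes the most recently used)
cache.use 3 # => 2   (key 2 was the least recently used, it is evicted)
.SOURCE
.QE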
Moreover,
.I "common database"
makes it possible to adjust the number of cached entries.
.
.QP
.SOURCE Ruby ps=9 vs=10
# Create a database with a data cache limited to 100,000 entries
database = DODB::Storage::Common(Car).new "path/to/db-cars", 100000
.SOURCE
The
.CLASS Storage::Common
class has the same API as the other database classes.
.QE
.
.SECTION RAM-only database for short-lived data
Databases are built around the objective to actually
.I store
data.
But sometimes the data only lives as long as the application.
Stop the application and the data itself becomes irrelevant, which happens on several occasions, for instance when the application keeps track of the connected users.
This case is not covered by traditional databases; it is out of their scope, and such short-lived data is usually handled within the application itself.
Yet, since DODB is a library and not a separate application (read: DODB is incredibly faster), this usage of the database can be relevant.
Having the same API to handle both long and short-lived data can be useful.
Moreover, the previously mentioned indexes (basic indexes, partitions and tags) would also work the same way for these short-lived data.
Of course, in this case, the file-system representation may be completely irrelevant.
And for all these reasons, the
.I RAM-only
DODB database and
.I RAM-only
indexes were created.
Let's recap the advantages of the RAM-only DODB database.
The DODB API is the same for short-lived (read: temporary) and long-lived data.
This includes the same indexes too, so a file-system representation of the current state of the application is possible.
RAM-only also means incredible performance since DODB is only a
.I very
small layer over a hash table.
.SS RAM-only database
Instantiating a RAM-only database is as simple as the other options.
Moreover, this database has exactly the same API as the others, thus changing from one to another is painless.
.QP
.SOURCE Ruby ps=9 vs=10
# RAM-only database creation
database = DODB::Storage::RAMOnly(Car).new "path/to/db-cars"
.SOURCE
Yes, the path is still required, which may be seen as a quirk, but the rationale\*[*] is sound.
.QE
.FOOTNOTE1
A path is still required despite the database being only in memory because indexes can still be instantiated for the database, and those indexes will require this directory.
Also, I worked enough already, leave me alone.
.FOOTNOTE2
.SS RAM-only indexes
Indexes have their RAM-only version.
.QP
.SOURCE Ruby ps=9 vs=10
# RAM-only basic indexes.
cars_by_name = cars.new_RAM_index "name", &.name
# RAM-only partitions.
cars_by_color = cars.new_RAM_partition "color", &.color
# RAM-only tags.
cars_by_keywords = cars.new_RAM_tags "keywords", &.keywords
.SOURCE
The API of the
.I "RAM-only index objects"
is exactly the same as the others.
.QE
As for the database API itself, changing from one version of an index to another is painless.
This way, one can opt for a cached index and, after some time not using the file-system representation, decide to switch to its RAM-only version; a 4-character modification and nothing else.
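.QP
As an illustration of the short-lived data use case mentioned earlier (keeping track of connected users), here is a hedged sketch; the
.I Session
type, its attributes and the path are made up for the example.
.SOURCE Ruby ps=9 vs=10
class Session
  include JSON::Serializable # values are serialized (JSON for now)
  property token : String
  property login : String
  def initialize(@token, @login)
  end
end

# Sessions only live as long as the application: RAM-only database and index.
sessions          = DODB::Storage::RAMOnly(Session).new "path/to/db-sessions"
sessions_by_token = sessions.new_RAM_index "token", &.token

sessions << Session.new("deadbeef", "alice")
pp! sessions_by_token.get? "deadbeef"
.SOURCE
.QE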
.
.
.
.SECTION DODB and memory constraint
Some environments may have very peculiar constraints, where caching data would cause problems or would be inefficient anyway\*[*].
.FOOTNOTE1
Caching would be inefficient for databases where the distribution of requests is homogeneous between the different entries, for example.
If the requests are random, without a small portion of the data receiving most requests (such as a Pareto distribution), caching becomes mostly irrelevant.
.FOOTNOTE2
In these cases, the
.CLASS "DODB::Storage::Uncached"
can be used\*[*].
.FOOTNOTE1
However, the
.CLASS DODB::Storage::Common
should be considered instead for most applications, even if the configured number of entries is low due to low RAM.
.FOOTNOTE2
.
.QP
.SOURCE Ruby ps=9 vs=10
# Uncached database creation
database = DODB::Storage::Uncached(Car).new "path/to/db-cars"
.SOURCE
.QE
.B "Uncached indexes" .
Cached indexes do not require a large amount of memory since the only stored data is an integer (the
.I key
of the data).
For that reason, indexes are cached by default.
But for highly memory-constrained environments, the cache can be removed.
.QP
.SOURCE Ruby ps=9 vs=10
# Uncached basic indexes.
cars_by_name = cars.new_uncached_index "name", &.name
# Uncached partitions.
cars_by_color = cars.new_uncached_partition "color", &.color
# Uncached tags.
cars_by_keywords = cars.new_uncached_tags "keywords", &.keywords
.SOURCE
The API of the
.I "uncached index objects"
is exactly the same as the others.
.QE
.
.
.SECTION Limits of DODB
DODB provides basic database operations such as storing, searching, modifying and removing data.
However, SQL databases have a few
.I properties
enabling a more standardized behavior, which may create some expectations towards databases from a general public standpoint.
These properties are called "ACID": atomicity, consistency, isolation and durability.
DODB doesn't fully handle ACID properties.
DODB doesn't provide
.I atomicity .
Operations cannot be chained and rolled back if one of them fails.
DODB doesn't handle
.I consistency .
There is currently no mechanism to prevent adding invalid values.
.I Isolation
is partially taken into account with a locking mechanism preventing race conditions.
However, parallelism is mostly required to respond to a large number of clients at the same time.
Also, SQL databases require communication between the application and the database, with an inherent latency slowing down the requests despite the fast algorithms to search for a value within the database.
Parallelism is required for SQL databases because of this latency (at least partially), which doesn't exist with DODB\*[*].
.FOOTNOTE1
FYI, the service
.I netlib.re
uses DODB and since the database is fast enough, parallelism isn't required despite enabling more than a thousand requests per second.
.FOOTNOTE2
With a cache, data is retrieved five hundred times quicker than with a SQL database.
Thus, parallelism is probably not needed but a locking mechanism is provided anyway, just in case; this may be overly simplistic but
.SHINE "good enough"
for most applications.
.I Durability
is taken into account.
Data is written on disk each time it changes.
Again, this is basic but
.SHINE "good enough"
for most applications.
.B "Discussion on ACID properties" .
The author of this document sees these database properties as a sort of "fail-safe".
Always nice to have, but not entirely necessary; at least not for every single application.
DODB will provide some form of atomicity and consistency at some point, but nothing fancy nor too advanced.
The whole point of the DODB project is to keep the code simple (almost
.B "stupidly"
simple).
Not handling these properties isn't a limitation of the DODB approach but a choice for this project\*[*].
.FOOTNOTE1
Which results from a lack of time, mostly.
.FOOTNOTE2
Not handling all the ACID properties within the DODB library doesn't mean they cannot be achieved.
Applications can have these properties, often with just a few lines of code.
They just don't come
.I "by default"
with the library\*[*].
.FOOTNOTE1
As a side note, the
.I consistency
property is often taken care of within the application despite being handled by the database, for various reasons.
.FOOTNOTE2
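.QP
As an illustration, a
.I consistency
check is often a few lines of application code (a hedged sketch; the accepted colors are made up for the example):
.SOURCE Ruby ps=9 vs=10
ACCEPTED_COLORS = ["red", "green", "blue"]

# Refuse invalid values before they reach the database.
def add_car(database, car : Car)
  raise "invalid color: #{car.color}" unless ACCEPTED_COLORS.includes? car.color
  database << car
end
.SOURCE
.QE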
.
.
.
.SECTION Experimental scenario
.LP
The following experiment shows the performance of DODB based on querying durations.
Data can be searched via
.I indexes ,
as for SQL databases.
Three possible indexes exist in DODB:
(a) basic indexes, representing 1 to 1 relations, the document's attribute is related to a value and each value of this attribute is unique,
(b) partitions, representing 1 to n relations, the attribute has a value and this value can be shared by other documents,
(c) tags, representing n to n relations, enabling the attribute to have multiple values which can be shared by other documents.
The scenario is simple: add values to a database with indexes (basic, partition and tag), then query a value 100 times through the different indexes.
Loop and repeat.
Five instances of DODB are tested:
.BULLET \fIuncached database\f[] shows the achievable performance with a strong memory constraint (nothing can be kept in-memory);
.BULLET \fIuncached database but cached index\f[] shows the improvement you can expect by having a cache on indexes;
.BULLET \fIcommon database\f[] shows the most basic use of DODB, with a limited cache (100k entries)\*[*];
.BULLET \fIcached database\f[] represents a database with all the entries in cache (no eviction mechanism);
.BULLET \fIRAM only\f[], the database doesn't have a representation on disk (no data is written on it).
The \fIRAM only\f[] instance shows a possible way to use DODB: to keep a consistent API to store data, including in-memory data with a lifetime related to the application's.
.ENDBULLET
.FOOTNOTE1
Having a cached database will probably be the most widespread use of DODB.
When memory isn't scarce, there is no point not using it to achieve better performance.
Moreover, the "common database" enables to configure the cache size, so this database is relevant even when the data-set is bigger than the available memory.
.FOOTNOTE2
The computer on which this test is performed\*[*] is an AMD PRO A10-8770E R7 (4 cores), 2.8 GHz.
When mentioned, the
.I disk
is actually a
.I "temporary file-system (tmpfs)"
to enable maximum efficiency.
.FOOTNOTE1
A very simple $50 PC, bought online.
Nothing fancy.
.FOOTNOTE2
The library is written in Crystal and so is the benchmark (\f[CW]spec/benchmark-cars.cr\f[]).
Nonetheless, despite a few technicalities, the objective of this document is to provide an insight into the approach used in DODB more than into this particular implementation.
The manipulated data type can be found in \f[CW]spec/db-cars.cr\f[].
.SOURCE Ruby ps=9 vs=10
class Car
  property name     : String          # 1-1 relation
  property color    : String          # 1-n relation
  property keywords : Array(String)   # n-n relation
end
.SOURCE
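.QP
For illustration, a measurement loop in the same spirit as the benchmark (a hedged sketch, not the actual \f[CW]spec/benchmark-cars.cr\f[] code) could look like this:
.SOURCE Ruby ps=9 vs=10
# Query the same value 100 times through an index and print the average duration.
durations = (1..100).map do
  Time.measure { cars_by_name.get? "Corvet" }.total_nanoseconds
end
puts "average: #{durations.sum / durations.size} ns"
.SOURCE
.QE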
.
.
.SS Basic indexes (1 to 1 relations)
.LP
An index makes it possible to match a single value based on a small string.
In our example, each \f[CW]car\f[] has a unique \fIname\f[] which is used as an index.
The following graph represents the result of 100 queries of a car based on its name.
The experiment starts with a database containing 1,000 cars and goes up to 250,000 cars.
.ps -2
.so graphs/query_index.grap
.ps \n[PS]
.QP
This figure shows the request durations to retrieve data based on a basic index with a database containing up to 250k entries, both with linear and logarithmic scales.
.QE
Since there is only one value to retrieve, the request is quick and time is almost constant.
When the value and the index are kept in memory (see \f[CW]RAM only\f[], \f[CW]Cached db\f[] and \f[CW]Common db\f[]), the retrieval is almost instantaneous\*[*].
.FOOTNOTE1
About 110 to 120 ns for RAM-only and cached database.
This is slightly more (about 200 ns) for the common database since there are a few more steps due to the inner structure to maintain.
.FOOTNOTE2
In case the value is on the disk, deserialization takes about 15 µs (see \f[CW]Uncached db\f[]).
The request is a little longer when the index isn't cached (see \f[CW]Uncached db and index\f[]); in this case DODB walks the file-system to find the right symlink to follow, thus slowing the process even more, by up to 20%.
The logarithmic scale version of this figure shows that \fIRAM-only\f[] and \fIcached\f[] databases have exactly the same performance.
The \fIcommon\f[] database is somewhat slower than these two due to the caching policy: when a value is requested, the \fIcommon\f[] database puts its key at the front of a list to represent a
.I recent
use of this data (conversely, the last values in this list are the least recently used entries).
Thus, the \fIcommon\f[] database takes an extra 80 ns for its caching policy, which makes this database about 67% slower than the previous ones to retrieve a value.
Uncached databases are far away from these results, as shown by the logarithmically scaled figure.
The data cache improves the duration of the requests, making them at least 170 times faster.
The results depend on the data size; the bigger the data, the slower the serialization (and deserialization).
In this example, the database entries are almost empty; they have very few attributes and not much content (a few dozen characters max).
Thus, performance of non-cached databases will be even more severely impacted with real-world data.
That is why alternative encodings, such as CBOR,
.[
CBOR
.]
should be considered for large databases.
.
.
.SS Partitions (1 to n relations)
The previous example showed the retrieval of a single value from the database.
The following will show what happens when thousands of entries are retrieved.
A partition index makes it possible to match a list of entries based on an attribute.
In the experiment, a database of cars is created along with a partition on their color.
Performance is analyzed based on the partition size (the number of red cars) and the duration to retrieve all the entries.
.ps -2
.so graphs/query_partition.grap
.ps \n[PS]
.QP
This figure shows the retrieval of cars based on a partition (their color), with both a linear and a logarithmic scale.
The number of cars retrieved scales from 2000 to 10000.
.QE
In this example, both the linear and the logarithmic scales are represented to better grasp the difference between all databases.
The linear scale shows the linearity of the request time for uncached databases.
Conversely, the logarithmically scaled figure does the same for cached databases,
which are flattened on the linear scale since they are between one and five hundred times quicker than the uncached ones.
The duration of a retrieval grows linearly with the number of matched entries.
On both figures, a dashed line is drawn representing a linear growth based on the quickest retrieval observed from basic indexes for each database.
This dashed line and the observed results differ slightly; observed results grow more than what has been calculated.
This difference comes, at least partially, from the additional process of putting all the results in an array (which may also include some memory management) and the accumulated random delays for the retrieval of each value (due to process scheduling on the machine, for example).
Further analysis of the results may be interesting but this is far beyond the scope of this document.
The objective of this experiment is to give an idea of the performance that can be expected from DODB.
Basically, uncached databases are between 70 and 600 times slower than cached ones.
The eviction policy in
.I common
database slows down the retrievals, which makes it 70% to 6 times slower than
.I cached
and
.I RAM-only
databases, and the more data there is to retrieve, the worse it gets.
However, retrieving thousands and thousands of entries in a single request may not be a typical usage of databases, anyway.
.
.
.SS Tags (n to n relations)
A tag index makes it possible to match a list of entries based on an attribute with potentially multiple values (such as an array).
In the experiment, a database of cars is created along with a tag index on a list of
.I keywords
associated with the cars, such as "elegant", "fast" and so on.
Performance is analyzed based on the number of entries retrieved (the number of elegant cars) and the request duration.
.
.ps -2
.so graphs/query_tag.grap
.ps \n[PS]
.QP
This figure shows the retrieval of cars based on a tag (all cars tagged as
.I elegant ),
with both a linear and a logarithmic scale.
The number of cars retrieved scales from 1000 to 5000.
.QE
.
.
The results are similar to the retrieval of partition indexes, because this is fundamentally the same thing:
.ENUM both tag and partition indexes enable retrieving a list of entries;
.ENUM the keys of the database entries come from listing the content of a directory (uncached indexes) or are directly available from a hash (cached indexes);
.ENUM data is retrieved irrespective of the index, it is either read from the storage device or retrieved from a data cache, which depends on the type of database.
.ENDENUM
Retrieving data from a partition or a tag involves exactly the same actions, which leads to the same results.
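.QP
To make these steps concrete, here is a hedged sketch of what an
.I uncached
partition retrieval boils down to (not the actual DODB code): the keys are obtained by listing a directory, then each value is fetched through the database.
.SOURCE Ruby ps=9 vs=10
# The symlinks in a partition directory are named after the keys of the entries.
keys = Dir.children "db-cars/partitions/by_color/red"
red_cars = keys.map { |key| database[key.to_i] }
.SOURCE
.QE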
A particularity of the tag index compared to partitions is that it enables multiple values for the same attribute, thus a database entry can be referenced in multiple directories.
For example, a car can be both
.I elegant
and
.I fast .
The retrieval of entries corresponding to a single
.I tag
is then strictly the same as retrieving a partition\*[*].
.FOOTNOTE1
It would be different in case of a retrieval of entries corresponding to
.I several
tags, such as selecting cars that are
.UL "both elegant and fast" .
This test may be done in a future version of this document.
.FOOTNOTE2
.
.
.SS Summary of the different databases and their use
.LP
.B "RAM-only database"
is the fastest database but has a limited use since data isn't saved.
.B "Cached database"
enables the same performance on data retrieval as RAM-only while actually storing data on a storage device.
This database is to be considered to achieve maximum speed for data-sets fitting in memory.
.B "Common database"
makes it possible to lower the memory requirements as much as desired.
The eviction policy implies some extra operations which lead to poorer, yet still acceptable, performance.
.B "Uncached database"
is mostly in this experiment as a control sample, to see what the worst possible performance of DODB could be.
Cached indexes should be considered for most applications, or even their RAM-only version in case the file-system representation isn't necessary.
.
.\" .ps -2
.\" .TS
.\" allbox tab(:);
.\" c | lw(3.6i) | cew(1.4i).
.\" DODB instance:Comment and database usage:T{
.\" compared to RAM-only
.\" T}
.\" RAM only:T{
.\" Worst memory footprint, best performance.
.\" T}:-
.\" Cached db and index:T{
.\" Performance for retrieving a value is the same as RAM only while
.\" enabling the admin to manually search for data on-disk.
.\" T}:about the same perfs
.\" Common db, cached index:T{
.\" Performance is still excellent while requiring a
.\" .UL configurable
.\" amount of RAM.
.\" Should be used by default.
.\" T}:T{
.\" 67% slower (about 200 ns) which still is great
.\" T}
.\" Uncached db, cached index:Very slow. Common database should be considered instead.:170 to 180x slower
.\" Uncached db and index:T{
.\" Best memory footprint, worst performance.
.\" T}:200 to 210x slower
.\" .TE
.\" .ps \n[PS]
.
.SS Conclusion on performance
As expected, retrieving a single value is fast and the size of the database doesn't matter much.
Each deserialization and, more importantly, each disk access is a pain point.
Caching the value enables a massive performance gain, data can be retrieved several hundred times quicker.
The more entries requested, the slower it gets; but more importantly, the poorer the performance gets
.UL "per entry" .
The eviction policy also implies poorer performance since it requires operations to select the data to cache.
However, the implementation is as simple as it gets, and some approaches could be considered to make it faster.
Notably, specific data-sets or database uses could call for adapting the eviction policy.
Same thing for the entire caching mechanism.
The current implementation offers a simple and generic way to store data based on typical database uses.
As a side note, let's keep in mind that requesting several thousand entries in DODB, with the common database for instance, is as slow as getting
.B "a single entry"
with SQL (varies from 0.1 to 2 ms on my machine for a single value without a search, just the first available entry).
This should help put things into perspective.
.
.SECTION Alternatives
Other approaches have been used besides SQL.
.B "Memcached"
.B "duckdb"
.TBD
.
.SECTION Future work
This section presents all the features I want to see in a future version of the DODB library.
.
.SS Pagination via the indexes: offset and limit
Right now, browsing the entire database by requesting a limited list at a time is possible, thanks to some functions accepting an
.I offset
and a
.I size .
However, this is not possible with the indexes; thus, when querying a partition for example, the API provides the entire list of matching values.
This is not acceptable for databases with large partitions and tags: memory will be over-used and requests will be slow.
.
.SS DODB and security
Right now, security isn't managed in DODB, at all.
Sure, DODB isn't vulnerable to SQL injections, but an internet-facing application may encounter a few other problems including, but not limited to, code injection, buffer overflows, etc.
Of course, DODB isn't a mechanism to protect applications from any possible attack, so most of the vulnerabilities cannot be countered by the library.
However, a few security mechanisms exist to prevent data leaks or data modification by an outsider, and the DODB library may implement some of them in the future.
.B "Preventing data leak" .
Since DODB is a library, any attack on the application using it can lead to a data leak.
For the moment, any part of the application can access data stored in memory.
Operating systems provide system calls to protect parts of the allocated memory.
For instance,
.FUNCTION_CALL mlock
prevents a region of memory from being put in the swap because it could lead to a data leak.
The
.FUNCTION_CALL madvise
syscall can prevent parts of the application's memory from being put in a core-dump\*[*], which is a debug file created when an application crashes, containing (part of) the process's memory at the time of the crash.
.FOOTNOTE1
.FUNCTION_CALL madvise
has the interesting option
.I MADV_DONTDUMP
to prevent a data leak through a core-dump, but it is Linux-specific.
.FOOTNOTE2
In a running process,
.FUNCTION_CALL mprotect
prevents the application itself from accessing part of its own memory;
the idea is to read (or write) memory only once you ask for it via a syscall.
Thus, you cannot access data from anywhere (by mistake or after an attack).
These mechanisms could be used internally by DODB to prevent a data leak since memory is handled by the library.
However, the Crystal language doesn't provide a way to manage memory manually and this may be a problem for mlock and mprotect.
Depending on the platform (the operating system), these syscalls may require the memory to be aligned with the memory pages.
Thus, the implementation won't be easy.
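.QP
As an illustration only (this is not part of DODB), declaring and calling
.FUNCTION_CALL mlock
from Crystal could look like the following sketch, assuming a POSIX system:
.SOURCE Ruby ps=9 vs=10
# Reopen the LibC bindings to declare mlock(2) and munlock(2).
lib LibC
  fun mlock(addr : Void*, len : SizeT) : Int
  fun munlock(addr : Void*, len : SizeT) : Int
end

buffer = Bytes.new 4096
# Ask the kernel to keep this buffer out of the swap.
if LibC.mlock(buffer.to_unsafe.as(Void*), LibC::SizeT.new(buffer.size)) != 0
  STDERR.puts "mlock failed (privileges? RLIMIT_MEMLOCK? page alignment?)"
end
.SOURCE
.QE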
.B "Side-note. Discussion on security" .
No authorization mechanism prevents the application from accessing unauthorized data, including, but not limited to, any file on the file-system.
Since this implementation of DODB is related to the Crystal language (which isn't fully ported to the OpenBSD platform at the moment), this is a problem.
.
.
.SECTION Conclusion
The
.I common
database should be an acceptable choice for most applications.
.TBD
.APPENDIX FIFO vs Efficient FIFO
.ps -2
.so graphs/addition_fifo.grap
.ps \n[PS]
.APPENDIX Common database performance
The
.I Common
database makes it possible to configure the number of allowed entries in the cache.
The following figures show the performance of the common database depending on the cache size.
.ps -2
.so graphs/fifo_query_index.grap
.ps \n[PS]
.QP
This figure shows the request durations to retrieve data based on a basic index with a database containing up to 250k entries.
.QE
.EQ
delim $$
.EN
This figure shows a single value being requested; since only one value is requested in the test, it is immediately put in the cache and never evicted.
For that reason, the result is stable amongst all
.I common
database instances:
.vp 2p
$+-$ 170 ns.
.EQ
delim off
.EN
.ps -2
.so graphs/fifo_query_partition.grap
.ps \n[PS]
.QP
This figure shows the request durations to retrieve data based on a partition containing up to 10k entries.
.QE
As we see in the figure, the duration for data retrieval grows almost linearly for databases with a sufficient cache size (starting with 10k entries).
When the cache size is not sufficient, the requests are a hundred times slower, which explains why the database with a cache size of one thousand entries isn't even represented in the graph, and why the 5k database has great results up to 5k partitions.
.ps -2
.so graphs/fifo_query_tag.grap
.ps \n[PS]
.QP
This figure shows the request durations to retrieve data based on a tag containing up to 5k entries.
.QE
As for partitions, the response time depends on the number of entries to retrieve and the duration increases linearly with the number of elements.
.
.
.APPENDIX Recap of the DODB API
This section provides a quick shorthand manual for the most important parts of the DODB API.
For an exhaustive API documentation, please generate the development documentation for the library.
The command
.COMMAND "make doc"
generates the documentation, then the
.COMMAND "make serve-doc"
command makes it possible to browse the full documentation with a web browser\*[*].
.FOOTNOTE1
The
.COMMAND "make serve-doc"
requires darkhttpd
.[
darkhttpd
.]
but this can be adapted to any other web server.
.FOOTNOTE2
.
.SS Database creation
.QP
.SOURCE Ruby ps=9 vs=10
# Uncached, cached, common and RAM-only database creation.
database = DODB::Storage::Uncached(Car).new "path/to/db"
database = DODB::Storage::Cached(Car).new "path/to/db"
database = DODB::Storage::Common(Car).new "path/to/db", 50000 # nb cache entries
database = DODB::Storage::RAMOnly(Car).new "path/to/db"
.SOURCE
.QE
.
.SS Browsing the database
.QP
.SOURCE Ruby ps=9 vs=10
# List all the values in the database
database.each do |value|
  # ...
end
.SOURCE
.QE
.QP
.SOURCE Ruby ps=9 vs=10
# List all the values in the database with their key
database.each_with_key do |value, key|
  # ...
end
.SOURCE
.QE
.
.SS Database search, update and deletion with the key (integer associated to the value)
.KS
.QP
.SOURCE Ruby ps=9 vs=10
value = database[key] # May throw a MissingEntry exception
value = database[key]? # Returns nil if the value doesn't exist
database[key] = value
database.delete key
.SOURCE
Side note for the
.I []
function: in case the value isn't in the database, the function throws an exception named
.CLASS DODB::MissingEntry .
To avoid this exception and get a
.I nil
value instead, use the
.I []?
function.
.QE
.KE
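.QP
A short usage sketch of the two access functions (the key 42 is arbitrary):
.SOURCE Ruby ps=9 vs=10
# Either rescue the exception...
begin
  car = database[42]
rescue DODB::MissingEntry
  puts "no car with key 42"
end

# ...or handle the nil case.
if car = database[42]?
  puts car.name
end
.SOURCE
.QE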
.
.
.SS Trigger creation
.QP
.SOURCE Ruby ps=9 vs=10
# Uncached, cached and RAM-only basic indexes.
cars_by_name = cars.new_uncached_index "name", &.name
cars_by_name = cars.new_index "name", &.name
cars_by_name = cars.new_RAM_index "name", &.name
# Uncached, cached and RAM-only partitions.
cars_by_color = cars.new_uncached_partition "color", &.color
cars_by_color = cars.new_partition "color", &.color
cars_by_color = cars.new_RAM_partition "color", &.color
# Uncached, cached and RAM-only tags.
cars_by_keywords = cars.new_uncached_tags "keywords", &.keywords
cars_by_keywords = cars.new_tags "keywords", &.keywords
cars_by_keywords = cars.new_RAM_tags "keywords", &.keywords
.SOURCE
.QE
.
.
.SS Database retrieval, update and deletion with an index
.
.QP
.SOURCE Ruby ps=9 vs=10
# Get a value from a 1-1 index.
car = cars_by_name.get "Corvet" # May throw a MissingEntry exception
car = cars_by_name.get? "Corvet" # Returns nil if the value doesn't exist
.SOURCE
.QE
.
.QP
.SOURCE Ruby ps=9 vs=10
# Get a value from a partition (1-n relations) or a tag (n-n relations) index.
red_cars = cars_by_color.get "red" # empty array if no such cars exist
fast_cars = cars_by_keywords.get "fast" # empty array if no such cars exist
# Several tags can be selected at the same time, to narrow the search.
cars_both_fast_and_expensive = cars_by_keywords.get ["fast", "expensive"]
.SOURCE
.QE
.
The basic 1-1
.I "index object"
can update a value by selecting a unique entry in the database.
.QP
.SOURCE Ruby ps=9 vs=10
car = cars_by_name.update updated_car # If the `name` hasn't changed.
car = cars_by_name.update "Corvet", updated_car # If the `name` has changed.
car = cars_by_name.update_or_create updated_car # Updates or creates the value.
car = cars_by_name.update_or_create "Corvet", updated_car # Same.
.SOURCE
.QE
For deletion, database entries can be selected based on any index.
Partitions and tags can take a block of code to narrow the selection.
.QP
.SOURCE Ruby ps=9 vs=10
cars_by_name.delete "Corvet" # Deletes the car named "Corvet".
cars_by_color.delete "red" # Deletes all red cars.
# Deletes cars that are both slow and expensive.
cars_by_keywords.delete ["slow", "expensive"]
# Deletes all cars that are both blue and slow.
cars_by_color.delete "blue" do |car|
  car.keywords.includes? "slow"
end
# Same.
cars_by_keywords.delete "slow" do |car|
  car.color == "blue"
end
.SOURCE
.QE
.
.
.SSS Tags: search on multiple keys
The tag index makes it possible to search for values based on multiple keys.
For example, searching for all cars that are both fast and elegant can be written this way:
.QP
.SOURCE Ruby ps=9 vs=10
fast_elegant_cars = cars_by_keywords.get ["fast", "elegant"]
.SOURCE
Used with a list of keys, the
.FUNCTION_CALL get
function returns an empty list in case the search fails.
.br
The implementation was designed to be simple (7 lines of code), not efficient.
However, with data and index caches, the search is expected to meet almost everyone's requirements, speed-wise, given that the tags are small enough (a few thousand entries).
.QE
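.QP
For reference, here is a hedged sketch of what such a multi-key search boils down to (not the actual DODB code); the
.FUNCTION_CALL keys_for
helper, which would return the keys indexed under a tag, is hypothetical.
.SOURCE Ruby ps=9 vs=10
# Intersect the keys matching each tag, then load the corresponding values.
def search_tags(database, tags : Array(String)) : Array(Car)
  keys_per_tag = tags.map { |tag| keys_for tag }
  common_keys  = keys_per_tag.reduce { |acc, keys| acc & keys }
  common_keys.map { |key| database[key] }
end
.SOURCE
.QE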
.
.