
Databases

structured data storage for completists and “data science”

tl;dr

I do a lot of data processing; most data stores are too inconvenient for my workflow, but here are the most convenient for my needs.

Or maybe I can pipeline my python analysis using blaze?

OK, full notes now:

With a focus on slightly specialised data stores for use in my statistical jiggery-pokery. Which is to say: I care about analysing lots of data, fast. This is probably inimical to running, say, your webapp from the same store, which has different requirements (massively concurrent writes, consistency guarantees, many small queries instead of a few large ones). Don’t ask me about that.

I prefer to avoid running a database server at all if I can, at least in the sense of a highly specialised multi-client server process; those are not optimised for a typical scientific workflow. First stop is in-process, non-concurrent-write data storage, e.g. HDF5 or SQLite.
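For instance, a minimal sketch of that serverless workflow, using the stdlib sqlite3 module and pandas’ HDF5 wrapper (which needs the pytables package installed); the file, table and column names here are made up:

```python
import sqlite3

import numpy as np
import pandas as pd

# SQLite: one file, zero configuration, queryable with SQL.
conn = sqlite3.connect("experiment.db")
conn.execute("CREATE TABLE IF NOT EXISTS results (trial INTEGER, loss REAL)")
conn.executemany("INSERT INTO results VALUES (?, ?)", [(0, 0.91), (1, 0.47)])
conn.commit()
print(conn.execute("SELECT AVG(loss) FROM results").fetchone())
conn.close()

# HDF5: better suited to large numeric arrays.
df = pd.DataFrame({"x": np.random.randn(1000), "y": np.random.randn(1000)})
with pd.HDFStore("experiment.h5") as store:
    store.put("samples", df)
    print(store["samples"].describe())
```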

However, if you want to mediate between lots of threads/processes/machines updating your data in parallel, a “real” database server can be justified.

OTOH if your data is big enough, perhaps you need a crazy giant distributed store of some kind? Requirements change vastly depending on your scale.

Filesystem stores

Unless your data is very very big, this is what you want.

Useful tools include Dask:

Dask is a flexible parallel computing library for analytics.
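For a flavour of it, a sketch of the pandas-like Dask dataframe API; the file glob and column names are made up:

```python
import dask.dataframe as dd

# Lazily point at a pile of CSVs too big for memory; nothing loads yet.
df = dd.read_csv("data/*.csv")

# Pandas-like syntax, executed in parallel and out-of-core on .compute().
result = df.groupby("category")["value"].mean()
print(result.compute())
```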

Bonus wrinkle with Protocol Buffers: the default Python implementation is slow; you need to install Google’s C++-backed protobuf backend.
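A sketch of selecting and checking the fast backend; the PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION environment variable is documented, though newer protobuf releases ship a different fast default (upb), so treat this as version-dependent:

```python
import os

# Must be set before the first protobuf import to take effect.
os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"] = "cpp"

from google.protobuf.internal import api_implementation

# "python" means the slow pure-Python fallback; "cpp" (or "upb" in
# newer releases) means a fast native backend is in use.
print(api_implementation.Type())
```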

You might also want everything not to be hard. Try prototool.

FlatBuffers is an efficient cross platform serialization library for C++, C#, C, Go, Java, JavaScript, PHP, and Python. It was originally created at Google for game development and other performance-critical applications…

Why not use Protocol Buffers, or .. ?

Protocol Buffers is indeed relatively similar to FlatBuffers, with the primary difference being that FlatBuffers does not need a parsing/unpacking step to a secondary representation before you can access data, often coupled with per-object memory allocation. The code is an order of magnitude bigger, too. Protocol Buffers has neither optional text import/export nor schema language features like unions.

CSV: a slow hot mess that has the virtue of many things claiming they use it, although they all use it badly, inconsistently and slowly. Best parsed with pandas or tablib and thereafter ignored.
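E.g. a sketch of the parse-once-then-ignore-it approach with pandas; the file and column names are made up:

```python
import pandas as pd

# CSV carries no type information, so pin the types down explicitly
# at the border, then never think about the file format again.
df = pd.read_csv(
    "measurements.csv",
    dtype={"sensor_id": "int64", "reading": "float64"},
    parse_dates=["timestamp"],
)
print(df.dtypes)
```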

Array stores that are not filesystem stores

TileDB.
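A minimal sketch of the TileDB Python API (the tiledb package) as I understand it; the array URI is made up and details vary between versions:

```python
import numpy as np
import tiledb

# Persist a NumPy array as a dense TileDB array on disk.
tiledb.from_numpy("example_array", np.arange(12.0).reshape(3, 4))

# Read back just a slice, without loading the whole array.
with tiledb.open("example_array") as A:
    print(A[0:2, 1:3])
```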

Time series/Event crunching/Streaming

Databases at the intersection of storing data and processing streams, usually for time series forecasting.

Heroic is our in-house time series database. We built it to address the challenges we were facing with near real-time data collection and presentation at scale. At the core are two key pieces of technology: Cassandra and Elasticsearch. Cassandra acts as the primary means of storage, with Elasticsearch being used to index all data. We currently operate over 200 Cassandra nodes in several clusters across the world, serving over 50 million distinct time series.

Document stores

Want to handle floppy, ill-defined documents with ill-specified, possibly changing metadata? Already resigned to querying and processing this stuff being depressingly slow and/or storage-greedy?

You’re looking for document stores!

If you are looking at document stores as your primary workhorse, as opposed to something you want to get data out of for other storage, then one of two situations applies.

Let’s assume the first, which is common.

So this is probably for the smartphone app version.
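For concreteness, a sketch using pymongo, the Python client for MongoDB (the archetypal document store); the database, collection and field names are made up:

```python
from pymongo import MongoClient

client = MongoClient("localhost", 27017)
notes = client.scratch.notes  # database "scratch", collection "notes"

# Documents are free-form dicts: schema drift is tolerated,
# which is both the point and the trap.
notes.insert_one({"title": "idea", "tags": ["ml"], "extra": {"anything": True}})

# Query by a field that other documents may simply lack.
for doc in notes.find({"tags": "ml"}):
    print(doc["title"])
```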

Relational databases

Long lists of numbers? Spreadsheet-like tables? Wish to do queries mostly of the sort supported by database engines, such as grouping, sorting and range queries? SQLite, if it fits in memory. (No need to click on that link though; SQLite is already embedded in your tool of choice.) TODO: how to write safely to SQLite from multiple processes through write locks; see the sketch below. Also: Mark Litwintschik’s Minimalist Guide to SQLite.
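On that TODO, the sketch below is my current understanding: WAL journaling lets readers proceed while one process writes, and a busy timeout makes contending writers wait for the lock instead of erroring out immediately. The table name is made up.

```python
import sqlite3

# timeout: how long this process waits on a locked database before failing.
conn = sqlite3.connect("shared.db", timeout=30.0)
conn.execute("PRAGMA journal_mode=WAL")    # readers no longer block on the writer
conn.execute("PRAGMA synchronous=NORMAL")  # common pairing with WAL
conn.execute("CREATE TABLE IF NOT EXISTS results (trial INTEGER, loss REAL)")

with conn:  # the context manager wraps the insert in a transaction
    conn.execute("INSERT INTO results VALUES (?, ?)", (2, 0.33))
conn.close()
```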

If not, or if you need to handle concurrent writing by multiple processes, use MySQL or Postgres: not because they are best for this job, but because they are common. Honestly, though, unless this is a live production service for many users, you should probably be using a disk-backed store.

ClickHouse, for example, is a columnar database that avoids some of the problems of row-oriented tabular databases. I guess you could try that? And Amazon Athena turns arbitrary data into SQL-queryable data, apparently. So the skills here are general.
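If you do poke at ClickHouse, a sketch with the third-party clickhouse-driver package; the table and column names are made up:

```python
from clickhouse_driver import Client

client = Client("localhost")

# Columnar storage makes this kind of scan-and-aggregate query cheap.
rows = client.execute(
    "SELECT category, avg(value) FROM events GROUP BY category"
)
print(rows)
```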

Accessing DBs from python

Maybe you can make numerical work easier using Blaze?

Blaze translates a subset of modified NumPy and Pandas-like syntax to databases and other computing systems. Blaze allows Python users a familiar interface to query data living in other data storage systems.

More generally, records, which wraps tablib and sqlalchemy, is good at this.
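A sketch of the records workflow; the database file, table and column names are made up:

```python
import records

db = records.Database("sqlite:///experiment.db")

# Plain SQL in, rows out, with bound parameters.
rows = db.query("SELECT trial, loss FROM results WHERE loss < :cap", cap=0.5)

# tablib underneath means cheap export to CSV, JSON, Excel, etc.
print(rows.export("csv"))
```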

Crazy giant distributed stores

Ever since Google, every CS graduate wants to write one of these. There are dozens of options; you probably need none of them.

Bedrock is a networking and distributed transaction layer built atop SQLite, the fastest, most reliable, and most widely distributed database in the world.

Bedrock is written for modern hardware with large SSD-backed RAID drives and generous RAM file caches, and thereby doesn’t mess with the zillion hacky tricks the other databases do to eke out high performance on largely obsolete hardware. This results in fewer esoteric knobs, and sane defaults that “just work”.

Not necessarily giant (I mean, I don’t know how it scales), but convenient for offline/online syncing and definitely distributed, is orbitdb, which uses ipfs for its backend.

See parallel computing.

Caches

redis and memcached are the default generic choices here. redis is newer and more flexible. memcached is sometimes faster? Dunno. Perhaps see Why Redis beats Memcached for caching.
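The usual cache-aside pattern with the redis-py client, sketched; the key name and TTL are arbitrary choices:

```python
import json

import redis

r = redis.Redis(host="localhost", port=6379)

def cached(key, compute, ttl=300):
    """Return the cached value for key, computing and storing it on a miss."""
    hit = r.get(key)
    if hit is not None:
        return json.loads(hit)
    value = compute()
    r.setex(key, ttl, json.dumps(value))  # store with a time-to-live
    return value

print(cached("answer", lambda: 6 * 7))
```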

See python caches for the practicalities of doing this in one particular language.
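Though for a single Python process, the standard library may be all you need; a sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=1024)
def slow_transform(x):
    # stand-in for an expensive pure function
    return x ** 2

slow_transform(3)  # computed
slow_transform(3)  # served from the in-process cache
print(slow_transform.cache_info())
```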

Graph stores

Graph-tuple oriented processing.

graphengine:

GE is also a flexible computation engine powered by declarative message passing. GE is for you, if you are building a system that needs to perform fine-grained user-specified server-side computation.

From the perspective of graph computation, GE is not a graph system specifically optimized for a certain graph operation. Instead, with its built-in data and computation modeling capability, we can develop graph computation modules with ease. In other words, GE can easily morph into a system supporting a specific graph computation.

UIs

If you want to access SQL databases, there are a couple of nice options in open-source land (only a few decades after SQL’s birth).

Database Flow is an open source self-hosted SQL client, GraphQL server, and charting application that works with your database.

Visualize schemas, query plans, charts, and results.

Java app.

Sqlectron:

A simple and lightweight SQL client desktop/terminal with cross database and platform support.

Unmaintained.

DBeaver:

Free multi-platform database tool for developers, SQL programmers, database administrators and analysts. Supports all popular databases: MySQL, PostgreSQL, MariaDB, SQLite, Oracle, DB2, SQL Server, Sybase, MS Access, Teradata, Firebird, Derby, etc.

(Java Eclipse app)

Apache Superset is a data exploration and visualization web application.

Superset provides an intuitive interface to explore and visualize datasets, and to create interactive dashboards.

Redash consists of two parts: a query editor for writing and sharing SQL queries, and visualizations/dashboards built on the results.