I do a lot of data processing; most data stores are too inconvenient for me. But here are the most convenient for my needs:
- Pre-sharded or non-concurrent-write datasets too big for RAM: hdf5. Annoying schema definition needed, but once you’ve done that it’s wonderfully fast for numerical data.
- Pre-sharded or non-concurrent-write datasets: sqlite. Has great tooling, a pity that it’s wasteful for numerical data.
- Pre-sharded or non-concurrent-write datasets between python and R: feather.
- Concurrent but not incessant writes (i.e. I just want to manage my threads): Maybe dogpile.cache or joblib?
- Concurrent frequent writes: redis.
Or maybe I can pipeline my python analysis using blaze?
OK, full notes now:
With a focus on slightly specialised data stores for use in my statistical jiggery-pokery. Which is to say: I care about analysing lots of data fast. This is probably inimical to running, e.g., your webapp from the same store, which has different requirements (massively concurrent writes, consistency guarantees, many small queries instead of a few large ones). Don’t ask me about that.
I prefer to avoid running a database server at all if I can, at least in the sense of a highly specialized multi-client server process. Those are not optimised for a typical scientific workflow. First stop is in-process, non-concurrent-write data storage, e.g. HDF5 or sqlite.
However, if you want to mediate between lots of threads/processes/machines updating your data in parallel, a “real” database server can be justified.
OTOH if your data is big enough, perhaps you need a crazy giant distributed store of some kind? Requirements change vastly depending on your scale.
Unless your data is very, very big, this is what you want.
HDF5: (Python/R/Java/C/Fortran/MATLAB/whatever) An optimised and flexible data format from the Big Science applications of the 90s. Inbuilt compression and indexing support for things too big for memory. A good default with wide support, although the table definition when writing data is booooring and it mangles text encodings if you aren’t careful.
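For concreteness, a minimal h5py sketch of the write-then-slice workflow (the file and dataset names here are made up; chunk sizes are whatever I felt like):

```python
import h5py
import numpy as np

data = np.random.randn(1_000_000, 10)

# Write a chunked, gzip-compressed dataset so it can be read lazily later.
with h5py.File("experiment.h5", "w") as f:
    dset = f.create_dataset(
        "samples", data=data, chunks=(10_000, 10), compression="gzip"
    )
    dset.attrs["description"] = "toy data"

# Read back only a slice, without loading the whole array into RAM.
with h5py.File("experiment.h5", "r") as f:
    head = f["samples"][:1000, :]
```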
Dask is a flexible parallel computing library for analytics. It emphasizes the following virtues:
- Provides parallelized NumPy array and Pandas DataFrame objects
- Enables distributed computing in Pure Python with access to the PyData stack
- Operates with low overhead, low latency, and minimal serialization necessary for fast numerical algorithms
- Supports complex and messy workloads
- Scales up: runs resiliently on clusters with 100s of nodes
- Scales down: trivial to set up and run on a laptop in a single process
- Designed with interactive computing in mind, it provides rapid feedback and diagnostics to aid humans
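To give a flavour, a minimal sketch of the array interface (assuming the dask package; the array sizes are arbitrary):

```python
import dask.array as da

# A 10000x10000 array stored as 1000x1000 chunks that never all need
# to be in memory at once.
x = da.random.random((10_000, 10_000), chunks=(1_000, 1_000))

# This only builds a task graph; nothing is computed yet.
y = (x - x.mean(axis=0)) / x.std(axis=0)

# .compute() executes the graph, by default on a local thread pool.
result = y[:5, :5].compute()
```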
protobuf: (Python/R/Java/C/Go/whatever) Google’s famous data format. Recommended for tensorflow although it’s soooooo boooooooring. A data format rather than a storage solution per se, so if you end up using it you’ve still only solved part of the problem and you need a db or caching layer, or at least a file naming and locking convention.
Bonus wrinkle: the default python version is slow - you need to install Google’s custom protobuf implementation.
You might also want everything not to be hard. Try prototool.
flatbuffers’ FAQ addresses the obvious question, “Why not use Protocol Buffers, or .. ?”:
Protocol Buffers is indeed relatively similar to FlatBuffers, with the primary difference being that FlatBuffers does not need a parsing/unpacking step to a secondary representation before you can access data, often coupled with per-object memory allocation. The code is an order of magnitude bigger, too. Protocol Buffers has neither optional text import/export nor schema language features like unions.
Array stores that are not filesystem stores
Time series/Event crunching/Streaming
Redis is nice for heavy write-transactions. You can just run it without setting up a special dedicated server. Convenient for things that are just big enough to fit in your memory but you need to process the shit out of them fast. Easy setup, built-in lua interpreter, very popular so widely compatible.
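A minimal sketch with the redis-py client (assumes a redis server already running on the default local port; the key names are invented):

```python
import redis

r = redis.Redis(host="localhost", port=6379)

# Plain key/value writes and reads.
r.set("run:42:status", "running")
status = r.get("run:42:status")  # b'running'

# Atomic counters are handy for coordinating workers.
r.incr("run:42:completed_tasks")

# Pipelines batch many writes into one round trip.
with r.pipeline() as pipe:
    for i in range(1000):
        pipe.set(f"result:{i}", i * i)
    pipe.execute()
```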
Influxdb is a database designed for you to query time-series live, by current time, relative age and so on. The sort of thing designed to run the kind of elaborate real time situation visualisation that evil overlords have in holographic displays in their lairs. Comes with free count aggregation and lite visualisations. Haven’t used it, just noting it here.
druid, as used by airbnb, is “a high-performance, column-oriented, distributed data store” that happens to be good at events.
timescaledb is a realtime/time series extension to postgres.
Heroic, by Spotify
Heroic is our in-house time series database. We built it to address the challenges we were facing with near real-time data collection and presentation at scale. At the core are two key pieces of technology: Cassandra and Elasticsearch. Cassandra acts as the primary means of storage with Elasticsearch being used to index all data. We currently operate over 200 Cassandra nodes in several clusters across the world serving over 50 million distinct time series.
rethinkdb is a database which does push instead of being polled. Recently open-sourced, very fancy pedigree, haven’t used it.
- UNSTRUCTURED DATA
QMiner provides support for unstructured data, such as text and social networks across the entire processing pipeline, from feature engineering and indexing to aggregation and machine learning.
QMiner provides out-of-the-box support for indexing, querying and aggregating structured, unstructured and geospatial data using a simple query language.
- C++ LIBRARY
QMiner is implemented in C++ and can be included as a library into custom C++ projects, thus providing them with stream processing and data analytics capabilities.
Want to handle floppy, ill-defined documents with ill-specified, possibly changing metadata? Already resigned to the process of querying and processing this stuff being depressingly slow and/or storage-greedy?
You’re looking for document stores!
If you are looking at document stores as your primary workhorse, as opposed to something you want to get data out of for other storage, then you have either
- Not very much data so performance is no problem, or
- a problem.
Let’s assume number 1, which is common.
Mongodb has a pleasant JS API but is not all that good at concurrent storage, so why are you bothering to do this in a document store? If your data is effectively single-writer you could just be doing this from the filesystem. Still, I can imagine scenarios where the dynamic indexing of post hoc metadata is nice, for example in the exploratory phase with a data subset?
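For that exploratory-phase use, the pattern looks roughly like this with pymongo (assumes a local mongod; the collection and field names are invented):

```python
from pymongo import MongoClient

client = MongoClient("localhost", 27017)
db = client.scratch

# Documents need no declared schema; metadata fields can vary per record.
db.runs.insert_one({"run_id": 1, "notes": "baseline", "params": {"lr": 0.1}})
db.runs.insert_one({"run_id": 2, "params": {"lr": 0.01, "momentum": 0.9}})

# Index a post hoc metadata field, then query on it.
db.runs.create_index("params.lr")
for doc in db.runs.find({"params.lr": {"$lt": 0.05}}):
    print(doc["run_id"])
```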
Couchdb was the pin-up child of the current crop of non-SQL databases, but seems to be unfashionable now.
kinto “is a lightweight JSON storage service with synchronisation and sharing abilities. It is meant to be easy to use and easy to self-host. Supports fine permissions, easy host-proof encryption, automatic versioning for device sync.”
So this is probably for the smartphone app version.
lmdb looks interesting if you want a very simple store that just guarantees you can write to it without corrupting data, and without requiring a custom server process. Most efficient for small records (around 2KB or less).
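A minimal sketch with the lmdb Python binding (py-lmdb); the map size and keys here are arbitrary:

```python
import lmdb

# map_size is the maximum size the store may grow to (here ~1 GB).
env = lmdb.open("./mystore", map_size=2**30)

# Writes happen inside a transaction; keys and values are bytes.
with env.begin(write=True) as txn:
    txn.put(b"sample:0001", b"some small serialised record")

# Readers get a consistent snapshot and can run alongside a writer.
with env.begin() as txn:
    value = txn.get(b"sample:0001")

env.close()
```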
Long lists of numbers? Spreadsheet-like tables? Wish to do queries mostly of the sort supported by database engines, such as grouping, sorting and range queries? Sqlite if it fits in memory. (No need to click on that link though, sqlite is already embedded in your tool of choice.) TODO: how to write safely to sqlite from multiple processes through write locks. Also: Mark Litwintschik’s Minimalist Guide to SQLite.
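Partial answer to that TODO, as far as I understand it (hedged; check the sqlite docs): WAL mode plus a generous busy timeout gets you most of the way, because sqlite still serialises writers but will wait for the lock instead of erroring out. A sketch with the standard-library sqlite3 module:

```python
import sqlite3

# timeout: how long a writer waits for the lock before raising "database is locked".
conn = sqlite3.connect("results.db", timeout=30.0)

# WAL mode lets readers proceed while a single writer holds the lock;
# writers from other processes queue up rather than corrupting anything.
conn.execute("PRAGMA journal_mode=WAL;")

conn.execute("CREATE TABLE IF NOT EXISTS results (run_id INTEGER, loss REAL)")

with conn:  # the context manager wraps this in a transaction
    conn.execute("INSERT INTO results VALUES (?, ?)", (1, 0.123))

rows = conn.execute(
    "SELECT run_id, AVG(loss) FROM results GROUP BY run_id"
).fetchall()
conn.close()
```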
If it doesn’t fit in memory, or if you need to handle concurrent writing by multiple processes, MySQL or Postgres. Not because they are best for this job, but because they are common. Honestly, though, unless this is a live production service for many users, you should probably be using a disk-backed store.
Maybe you can make this easier using Blaze.
Blaze translates a subset of modified NumPy and Pandas-like syntax to databases and other computing systems. Blaze allows Python users a familiar interface to query data living in other data storage systems.
Clickhouse for example is a columnar database that avoids some of the problems of row-oriented tabular databases. I guess you could try that?
crazy giant distributed stores
Ever since google, every CS graduate wants to write one of these. There are dozens of options; you probably need none of them.
Hbase, for Hadoop (the original hip open-source one, no longer hip)
Hypertable, Baidu’s open competitor to Google’s internal database
[…] is a networking and distributed transaction layer built atop SQLite, the fastest, most reliable, and most widely distributed database in the world.
Bedrock is written for modern hardware with large SSD-backed RAID drives and generous RAM file caches, and thereby doesn’t mess with the zillion hacky tricks the other databases do to eke out high performance on largely obsolete hardware. This results in fewer esoteric knobs, and sane defaults that “just work”.
Build flexible, distributed systems that can leverage the entire history of your critical data, not just the most current state. Build them on your existing infrastructure or jump straight to the cloud.
See parallel computing.
dogpile.cache is a caching API which provides a generic interface to caching backends of any variety, and additionally provides API hooks which integrate these cache backends with the locking mechanism of dogpile.
It’s well-done, but doesn’t integrate especially smoothly with futures and modern concurrency as seen in e.g. tornado.
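For flavour, the basic usage pattern (using the file-backed dbm backend; the filename and function here are invented):

```python
from dogpile.cache import make_region

# A region bundles a backend with a locking strategy, so only one
# thread (or process, with a suitable backend) regenerates an expired
# value at a time.
region = make_region().configure(
    "dogpile.cache.dbm",
    expiration_time=3600,
    arguments={"filename": "analysis_cache.dbm"},
)

@region.cache_on_arguments()
def expensive_summary(dataset_name):
    # pretend this is a slow aggregation
    return {"dataset": dataset_name, "mean": 0.0}

# First call computes and stores; later calls within the hour hit the cache.
expensive_summary("trial_7")
```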
joblib cache looks convenient, but I can’t work out whether it’s multi-write safe, or supposed to be invoked only from some master process and thus not need locking.
Transparent and fast disk-caching of output value: a memoize or make-like functionality for Python functions that works well for arbitrary Python objects, including very large numpy arrays. Separate persistence and flow-execution logic from domain logic or algorithmic code by writing the operations as a set of steps with well-defined inputs and outputs: Python functions. Joblib can save their computation to disk and rerun it only if necessary.
It looks convenient, and is the easiest mmap-compatible solution I know, but it only supports function memoization, so if you want to access results some other way or access partial results it can get very convoluted unless you can naturally factor your code into function memoizations.
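The happy path looks roughly like this (the cache directory and function are made up):

```python
import numpy as np
from joblib import Memory

memory = Memory("./joblib_cache", verbose=0)

@memory.cache
def simulate(seed, n):
    rng = np.random.default_rng(seed)
    return rng.standard_normal((n, n)) @ rng.standard_normal((n, n))

# The first call runs and persists the result to disk; identical calls
# afterwards load it back (memory-mapped, if you construct Memory with mmap_mode).
a = simulate(0, 2000)
b = simulate(0, 2000)  # served from the cache
```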
On that tip, klepto is scientific computation focussed, but doesn’t look as active or seem to provide many compelling selling points, yet.
Graph-tuple-oriented processing…
GE is also a flexible computation engine powered by declarative message passing. GE is for you, if you are building a system that needs to perform fine-grained user-specified server-side computation.
From the perspective of graph computation, GE is not a graph system specifically optimized for a certain graph operation. Instead, with its built-in data and computation modeling capability, we can develop graph computation modules with ease. In other words, GE can easily morph into a system supporting a specific graph computation.
If you want to access SQL databases, there are a couple of nice options in open-source land (only a few decades after SQL’s birth).
Database Flow is an open source self-hosted SQL client, GraphQL server, and charting application that works with your database.
Visualize schemas, query plans, charts, and results.
datasette provides a read-only web JSON API for SQLite.
A simple and lightweight SQL client desktop/terminal with cross database and platform support.
Free multi-platform database tool for developers, SQL programmers, database administrators and analysts. Supports all popular databases: MySQL, PostgreSQL, MariaDB, SQLite, Oracle, DB2, SQL Server, Sybase, MS Access, Teradata, Firebird, Derby, etc.
(Java Eclipse app)
Apache Superset is a data exploration and visualization web application.
- A wide array of beautiful visualizations to showcase your data.
- A state of the art SQL editor/IDE exposing a rich metadata browser, and an easy workflow to create visualizations out of any result set.
- Out of the box support for most SQL-speaking databases
- [other keywords that only boring bizdev types care about and no one real ever needs]
Redash consists of two parts:
- Query Editor
Think of JS Fiddle for SQL queries. It’s your way to share data in the organization in an open way, by sharing both the dataset and the query that generated it. This way everyone can peer review not only the resulting dataset but also the process that generated it. Also it’s possible to fork it and generate new datasets and reach new insights.
- Visualizations & Dashboards
Once you have a dataset, you can create different visualizations out of it, and then combine several visualizations into a single dashboard. Currently it supports charts, pivot table and cohorts.