
Julia, the programming language

The hippest way to get your IEEE754 on. Hngh!

Julia: A JIT-compiled language with especial focus on high performance scientific computation.

Why

tl;dr No magic bullet, but a handy arrow for the quiver.

Some of Julia’s community makes ambitious claims about julia being the fastest and bestest thing ever.

Unsurprisingly, julia is no panacea for programming ills, nor will it ever be. It is well designed for numerical computation. In my non-rigorous experiments it seems to do better than other scripting+compilation options, such as cython, on certain tasks. In particular, if you have dynamically defined code in the inner loop it does very well — say, you are doing a Monte Carlo simulation, but don’t know the user’s desired density ahead of time. This is more or less what you’d expect from doing compilation as late as possible rather than shipping a compiled library, but it (usually) involves less messing around with compilation toolchains, platform libraries, ABIs, makefiles etc. OTOH it uses lots of memory, so don’t be running this on your raspberry pi. (Although my colleague Rowan assures me he runs serious julia code on Raspberry Pi all the time, so maybe I need to tighten up my algorithms.)

Julia has its own idiosyncratic frictions. The community process is problematic (see also giving up on Julia). The debugging options are not impressive, and library support is patchy, especially at this early stage. Many of the libraries don’t adhere to Julia’s own best practice.

That said, the idea of a science-users-first JIT language is timely, and Julia is that. Python, for example, has clunky legacy issues in the numeric code and a patchy API, and is ill-designed for JIT-compilation, despite various projects that attempt to facilitate it. Matlab is expensive and nasty for non-numerics, not to mention old and crufty. Lua has some good science libraries and could likely have filled this niche but for reasons that are sociological as well as technical doesn’t have the hipness or critical mass of Julia.

The language

Intros

In order of increasing depth

Here’s something that wasn’t obvious to me: What are Symbols?

Pitfalls, gotchas

Chris Rackauckas mentions 7 Julia gotchas.

Here are some more.

Implementing methods for custom types is clunky

You want to implement a standard interface on your type so you can, e.g. iterate over it, which commonly looks like this:

for element in iterable
    # body
end

or equivalently

iter_result = iterate(iterable)
while iter_result !== nothing
    (element, state) = iter_result
    # body
    iter_result = iterate(iterable, state)
end

Here is a simple example of that: A range iterator which yields every nth element up to some number of elements could look like

julia> struct EveryNth
           n::Int
           start::Int
           length::Int
       end

julia> function Base.iterate(iter::EveryNth, state=(iter.start, 0))
           element, count = state

           if count >= iter.length
               return nothing
           end

           return (element, (element + iter.n, count + 1))
       end

julia> Base.length(iter::EveryNth) = iter.length

julia> Base.eltype(iter::EveryNth) = Int

(If you are lucky you might be able to inherit from AbstractArray.)
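With those methods defined, the type participates in all the generic iteration machinery. Here is a self-contained sketch (repeating the definitions above so it runs standalone):

```julia
# The EveryNth iterator from above, plus what its iterate method unlocks.
struct EveryNth
    n::Int
    start::Int
    length::Int
end

function Base.iterate(iter::EveryNth, state=(iter.start, 0))
    element, count = state
    count >= iter.length && return nothing
    return (element, (element + iter.n, count + 1))
end

Base.length(iter::EveryNth) = iter.length
Base.eltype(::Type{EveryNth}) = Int  # defined on the type, not an instance

# Any generic consumer of the iteration protocol now works:
collect(EveryNth(2, 0, 3))        # == [0, 2, 4]
sum(EveryNth(2, 0, 3))            # == 6
[x^2 for x in EveryNth(2, 0, 3)]  # == [0, 4, 16]
```

Note that `eltype` is most usefully defined on the type itself (`::Type{EveryNth}`), so that generic code can query it without an instance.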

It’s weird for me that this requires injecting your methods into another namespace; in this case, Base. That might feel gross, and doing things this way leads to surprising behaviour and some fairly mind-bending namespace resolution rules for methods. Importing one package can magically change the behaviour of another. This monkey-patch style (called, in julia argot, “type piracy”) is everywhere in Julia, and is clearly marked when you write a package, but not when you use the package. Anyway, it works fine and I can’t imagine how to handle the multiple dispatch thing better, so deal with it.

The type system is logical, although it’s not obvious if you are used to classical OOP. (Not a criticism.)

It allocates for and copies array slices per default

You are using a chunk of an existing array and don’t want to copy? Consider using views for slices, they say, which means not using slice notation but rather the view function, or the handy @views macro.
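A minimal sketch of the difference:

```julia
# Slicing copies; view-ing aliases.
A = collect(1:10)
s = A[2:4]         # slice notation: allocates a new 3-element array
v = @view A[2:4]   # a lightweight view into A's memory, no copy

v[1] = 99
A[2]               # == 99: mutating the view mutates A
s[1]               # == 2: the earlier copy is unaffected

# The @views macro converts every slice in an expression to a view:
@views total = sum(A[2:4]) + sum(A[5:7])
```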

Custom containers are scruffy

If you want fast numerics but you are not sure of your float precision, you should not use AbstractFloat for your argument type definitions (although it works fine for simple types). EDIT: You don’t need to worry about this in method definitions, as Chris Rackauckas points out.

If you do need well-typed containers, the idiomatic way to do this is using parametric types and parametric methods and so-called orthogonal design.

AFAICT a parametric declaration is the goods if, e.g. you want your method to work with an array of Float32, or of Float64, or of Float16, or of Decimal, but not an array which mixes those types. But also it should work on dense arrays, or view of dense arrays, or sparse arrays thereof, right? Then the correct type would look like one of these:

RealMatrixType = Union{AbstractMatrix{F},SparseMatrixCSC{F}} where {F<:Real}
RealVectorType = Union{AbstractVector{F},SparseVector{F}} where {F<:Real}

EDIT: Don’t bother with this kind of function declaration; the compiler will work it out so long as the arguments’ types are sensible. In fact it’s counterproductive, since you might be unnecessarily restrictive. The rule is: let the compiler work out the argument types in function definitions, but choose the types yourself in variable definitions (i.e. when you are calling said functions).
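A sketch of that rule in practice: loose signatures in function definitions, concrete types at the call site.

```julia
# Idiomatic: leave the argument type loose; the compiler specialises
# a fresh method instance for each concrete argument type it sees.
sumsq(xs) = sum(abs2, xs)

# A parametric method, if you genuinely need a homogeneous element type:
first_and_last(xs::AbstractVector{F}) where {F<:Real} = (first(xs), last(xs))

sumsq(Float32[1, 2, 3])     # specialised for Vector{Float32}, == 14.0f0
sumsq([1.0, 2.0, 3.0])      # separately specialised for Vector{Float64}
first_and_last([1.5, 2.5])  # == (1.5, 2.5)
```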

Argument syntax is only OK

Keyword arguments exist but do not participate in method dispatch. They are second-class citizens and might make things slow or stupid if you need to specialise your code based on them. So… design your functions around that. This is usually OK with careful thought, but small lapses lead to many irritating helper functions to handle default arguments.
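One common workaround is to funnel the keyword into a positional Val argument and dispatch on that; a minimal sketch (the names here are illustrative):

```julia
# Keywords can't be dispatched on, so the public keyword-taking wrapper
# forwards to positional methods that the compiler can specialise.
solve(x, ::Val{:fast})    = x + 1      # cheap integer path
solve(x, ::Val{:careful}) = x + 1.0    # float path
solve(x; mode::Symbol = :fast) = solve(x, Val(mode))

solve(1)                  # == 2
solve(1; mode=:careful)   # == 2.0
```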

Containers of containers need careful thinking about parametric types

It’s hard to work out what went wrong when you hear Type definition: expected UnionAll, got TypeVar, but you can get there.

Going fast isn’t magical

Don’t believe that wild-eyed evangelical gentleman at the Julia meetup (and so far there has been one at every meetup). He doesn’t write actual code; he’s too busy talking about it. You still need to optimize and hint to make your code all that you want it to be.

Here is a convenient overview of timer functions to help you work out what to optimize.

Active Julia contributor Chris Rackauckas, in the comments of this blog, is at pains to point out that type-hinting is rarely required in function definitions, and that often even threaded vectorisation annotation is not needed, which is amazing. (The manual is more ambivalent about that latter point, but anything in this realm is a mild bit of magic.)

Not to minimize how excellent that is, but bold coder, still don’t get lazy! This of course doesn’t eliminate the role that performance annotations and careful code structuring typically play, and I also do not regard it as a bug that you occasionally annotate code to inform the computer what you know about what can be done with it. Julia should not be spending CPU cycles proving non-trivial correctness results for you that you already know. That would likely mean solving redundant, possibly superpolynomially hard problems to speed up polynomial problems, which is a bad bet. It is a design feature of julia that it allows you to be clever; it would be a bug if it tried to be clever in a stupid way. You will need to think about array views versus copies. And coding so that, e.g., the broadcast mechanism can do its best job of optimising requires some thought. (See also, say, blog posts on how you structure algorithms for fancy GPU computing.)

Profiling

The built-in tool is called @time. Note that it can also time arbitrary begin/end blocks.
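For instance:

```julia
# @time reports elapsed time and allocations, and returns the value
# of the expression it wraps.
@time sum(rand(10^6))

# It also times begin/end blocks:
@time begin
    xs = rand(10^5)
    s = sum(sin, xs)
end
```

(The first call to any function includes compilation time, so time a second run for a fairer measurement.)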

It’s kind of the opposite of profiling, but you can give the user feedback about what’s happening using a ProgressMeter.

Near-magical performance annotations

@simd is equivocally endorsed by julia bosses, as it may or may not be needed to squeeze the best possible performance out of an inner loop. @inbounds disables bounds checking, but be very careful with your indexing. @fastmath violates the strict IEEE math setup for performance, in a variety of ways, some of which sound benign and some not so much.
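A sketch of what those annotations typically look like on an inner loop; @inbounds is only safe here because eachindex guarantees valid indices:

```julia
# Annotated inner-loop sum: @inbounds skips bounds checks,
# @simd permits vectorised reordering of the accumulation.
function fastsum(xs::Vector{Float64})
    s = 0.0
    @inbounds @simd for i in eachindex(xs)
        s += xs[i]
    end
    s
end

fastsum([1.0, 2.0, 3.0])  # == 6.0
```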

Optimising for local CPU

Unsubstantiated rumour (see Chris Rackauckas’ comments for substantiation) has it that the following optimises a system julia 0.6 installation for the local CPU:

include(
    joinpath(
        dirname(JULIA_HOME), "share", "julia", "build_sysimg.jl"
    )
)
build_sysimg(force=true)

It is not needed for Julia 0.7+.

Shunting to GPUs or other fancy architecture

Coding on GPUs for julia is, at least in principle, easy. Thus you can possibly glean some nearly-free acceleration by using CuArrays to push computation onto GPUs. Example. AFAICT the hippest interface is the vendor-neutral GPUArrays, which (I think) will transparently use CuArrays and CUDAnative for, e.g., an NVIDIA GPU. You can also invoke them in a vendor-specific way, which looks easy. There is even activist support for google cloud TPU compilation.

Installation

Bareback

Distribution packages are generally veeery old in julia years; if you want the faddish features that are the whole reason you got lured to Julia, download from the main site.

On macOS you could do this to put the binary in your path.

ln -s /Applications/Julia-1.1.app/Contents/Resources/julia/bin/julia ~/bin/julia

JuliaPro

Infrequent but luxurious Julia installs provided through JuliaPro.

I don’t use these since I had early problems with them which turned out to be in the Intel MKL builds. AFAICT this is not even offered any longer?

If it is, still avoid it. Vanilla Juno is much more stable, and also smaller, and not noticeably slower on my particular code. In my experience, in any numerical computing, and not just in Julia, you should steer clear of Intel MKL until you absolutely need it. (Because you really want to make your code many gigabytes larger, complicate the licensing, crash more often, and also shave a few points off the execution time.)

Workflow

General tips: look out for handy interactive introspection tricks, e.g. @which tells you which method your function invocation would invoke. There are other macros to tell you which file it comes from etc.

Modern

For Julia > 0.7, you should use Revise.jl, and Project environments.

(v1.0) pkg> generate PatternMachine
Generating project PatternMachine:
    PatternMachine/Project.toml
    PatternMachine/src/PatternMachine.jl

shell> cd PatternMachine
/media/dan/SchnittPunkt/Source/PatternMachine

(v1.0) pkg> activate .

julia> import PatternMachine
[ Info: Precompiling PatternMachine [93e276e4-e6da-11e8-1ff8-9f9dfed081b5]

(PatternMachine) pkg> add DSP

Revise auto init is complicated: To .julia/config/startup.jl add

atreplinit() do repl
    try
        @eval using Revise
        @async Revise.wait_steal_repl_backend()
    catch
    end
end

and (they claim) that to .julia/config/startup_ijulia.jl you must add

try
    @eval using Revise
catch
end

But for me this leads to the kernel hanging shortly after startup, and the non-IJulia setup is sufficient.

For VS code users .julia/config/startup.jl should purportedly be

atreplinit() do REPL
    @schedule begin
        sleep(0.1)
        try
            @eval using Revise
        catch err
            warn("Could not load Revise.")
        end
    end
end

Here is Erik Engheim’s modern workflow walk-through.

crappy 0.6 workflow

This was awful in Julia 0.6. You are using Julia because it is dynamic and because it is fast, but if you try to use code in a dynamic fashion, in the global namespace, it is no longer fast. Many complicated interactions ensue with the module system, and the recommended workarounds keep changing, and then suddenly it takes 90 seconds to reload all your module dependencies and you may as well have a compile/link cycle and use c++.

IDEs/workbooks

There is a reasonable IDE called juno, built on atom. There is jupyter integration through IJulia.

Both these have their own annoyances.

Juno

Juno is single-window only so you can’t use multiple monitors, and thus you end up squinting at tiny windows of code hidden between all the outputs. Atom’s panes aren’t well-designed for this use-case. For me that means there are a million tiny frictions distracting me from writing code in Juno. I can’t stand it.

If you install Juno as an app, but you also already use Jupyter, there is an additional annoyance in that it hijacks your atom install in a confusing way and mangles your various package preferences. If you love juno, I recommend installing it from within atom via the uber-juno package.

Possibly you can bypass this using homebrew? I didn’t try. But maybe give this a burl:

brew cask install juno

I personally don’t find juno worth the irritations, and never use it. Instead…

IJulia

IJulia isn’t an IDE per se; it’s a sort-of light interactive notebook. You don’t want to edit code in IJulia; for that you’ll need a text editor. There are loads of those. I use visual studio code. These two components make for a less annoying, zippier and more stable workflow for my style, by not trying to do everything badly, which is what my experience of Juno reminds me of.

IJulia is also not flawless. For a start, it’s built on Jupyter, which I also don’t love.

For another thing, it does its own overzealous installs per default, profligately installing another copy of jupyter, which you then have to maintain separately. Boring. You can bypass this by commanding it to use the perfectly good jupyter you already have installed:

ENV["JUPYTER"] = "/usr/local/bin/jupyter"
Pkg.add("IJulia")

Now julia appears as a normal kernel in your usual jupyter setup.

If you want particular jupyter kernels for particular julia environments, that is possible using the following kernel interpreter template:

julia -e 'import Pkg; Pkg.activate("myproject")' -L /path/to/kernel.jl

Literate coding

There is a package called Weave.jl which is inspired by R’s knitr but compatible with jupyter. It is also not an IDE, but a good way of invoking reproducible self-documenting code. It could probably be used to fashion a working academic paper if you used the right pandoc hacks.

Pkg.add("Weave")

NB you can also use RMarkdown in julia mode. It’s not clear to me how graphing works in this setup.

See also Literate.jl for some similar functionality.

API, FFIs

See the API list

C

Sort of easy, but there is a tedious need to define the call signature at call time. Survivable.

R

XRJulia:

This package provides an interface from R to Julia, based on the XR structure, as implemented in the XR package, in this repository.

Rjulia:

rjulia provides an interface between R and Julia. It allows a user to run a script in Julia from R, and maps objects between the two languages.

Python

tl;dr

conda create -n conda_jl python
conda activate conda_jl
env CONDA_JL_HOME="$HOME/anaconda3/envs/conda_jl" \
    CONDA_JL_VERSION=3 \
    PYTHON=$(which python) \
    JUPYTER=$(which jupyter) \
    julia
using Pkg
Pkg.add("IJulia")
Pkg.add("Conda")
Pkg.resolve()
Pkg.build()

PyCall.jl invokes python. Per default it installs yet another conda python, via Conda.jl, and defaults to the elderly python 2.7. This is irritating for various reasons, such as being ancient, and eating your diskspace with yet more versions of the same shit that you already have installed if you use python, but even more awful versions.

Here is how to use an existing version

env PYTHON=$(which python3) \
    julia
Pkg.build("PyCall")

Here is how you use Conda, but with python 3:

ENV["CONDA_JL_VERSION"] = "3"
Pkg.build("Conda")

Here is how you use an existing environment

conda create -n conda_jl python
export CONDA_JL_HOME=~/anaconda3/envs/conda_jl
julia -e 'Pkg.build("Conda")'

Either way you should regularly run conda clean to stop your disk filling up with obsolete versions of obscure dependencies for that package you tried out that one time because conda is weird.

conda clean -pt

Pretty printing, formatting

Many things seem to be based on Formatting.jl, which apes python string interpolation, and this is a baseline good idea for formatting numbers. There is a friendly-ish fork, Format.jl, which has a slightly different philosophy:

This package offers Python-style general formatting and c-style numerical formatting (for speed).

Matti Pastell’s incredible Weave.jl supports magical table formatting for LaTeX/HTML. TexTables.jl is a specialised renderer for scientific table output. More specialised still, RegressionTables does the same for statistical tables; its launch announcement is also a tour of the statistical ecosystem. There are many other formatting libraries, because everyone hates the built-in options, which are a little tedious.

Mustache.jl also has some traction, being a templating syntax used by some other things.

Latexify marks up certain julia objects nicely for math:

latexify("x/y") |> print
$\frac{x}{y}$

Data loading/saving/exchange

The names are nearly all self-explanatory.

Debugging and coding

Debugger, Gallium.jl:

This is the julia debugger. Please note that the 0.6 version of this package currently does not support breakpointing, C/C++ debugging or native code inspection. These features are being rebuilt, but were never particularly reliable in prior versions of this package and a cause of instability for the more mature features. In exchange, this package features a significantly more robust pure julia debug prompt, provided by ASTInterpreter2. Please file interpreter issues against that package.

Linter, Lint.jl (also has an atom linter plugin).

Signal processing, esp audio

DSP.jl has been split off from core and now needs to be installed separately.

Numerical tours of signal processing has a Julia edition, showing you how to do some common things to get started.

JuliaAudio processes audio. They recommend PortAudio.jl as a realtime soundcard interface, which looks sorta simple. See rkat’s example of how this works. There are useful abstractions like SampledSignals to load audio and keep the data and signal rate bundled together. Although, as SampledSignal maintainer Spencer Russell points out, AxisArrays might be the right data structure for sample signals, and you could use SampledSignals strictly for IO.

Images.jl processes images.

Linear algebra and arrays

Iteration/Indexing

Julia arrays are 1-indexed, right? No. They have arbitrary indices. Some care is needed to support the (recently introduced) arbitrary indexing. So best practice is not to count from 1:length(A).

If you just need to visit each element, iterate using eachindex.
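For instance, as a sketch:

```julia
# eachindex yields valid, efficient indices whatever the array's
# indexing scheme or memory layout; counting 1:length(A) would not
# be generic over arbitrary-indexed arrays.
function sum_all(A::AbstractArray)
    total = zero(eltype(A))
    for i in eachindex(A)
        total += A[i]
    end
    total
end

sum_all([10 20; 30 40])  # == 100
```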

If you want to iterate by axes… instead of

function mycopy!(dest::AbstractVector, src::AbstractVector)
    length(dest) == length(src) || throw(DimensionMismatch("vectors must match"))
    # OK, now we're safe to use @inbounds, right? (not anymore!)
    for i = 1:length(src)
        @inbounds dest[i] = src[i]
    end
    dest
end

you do

function mycopy!(dest::AbstractVector, src::AbstractVector)
    axes(dest) == axes(src) || throw(DimensionMismatch("vectors must match"))
    for i in LinearIndices(src)
        @inbounds dest[i] = src[i]
    end
    dest
end

Generic multidimensional algorithms are not necessarily intuitive but are possible. There are several methods.

Broadcasting is often simplest and fastest, but requires some thought. For some built-in methods you have dot syntax for invoking the broadcast method, which is recognisable from, e.g., python. In practice this needs you to use reshape to get the right dimensions lined up.
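A small sketch of lining dimensions up with reshape:

```julia
# An "outer" operation via broadcasting: reshaping y to a 1×4 row
# makes the 3-vector x broadcast down rows and y across columns.
x = [1.0, 2.0, 3.0]
y = [1.0, 10.0, 100.0, 1000.0]
z = x .* reshape(y, 1, :)   # 3×4 outer product
size(z)  # == (3, 4)
```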

For some reason there seems to be shame attached to using macros for this, but there is a fairly simple and transparent macro system called Base.Cartesian; see Mike Innes’ explanation.

If you want to avoid macros but you can’t work out how to get your broadcasts to line up, there is the CartesianIndices system, which is compact but fairly confusing, and AFAICT general. Here is the “surprisingly simple” (sic) one-pole filter implementation

function expfiltdim(x, dim::Integer, α)
    s = similar(x)
    Rpre = CartesianIndices(size(x)[1:dim-1])
    Rpost = CartesianIndices(size(x)[dim+1:end])
    _expfilt!(s, x, α, Rpre, size(x, dim), Rpost)
end

function _expfilt!(s, x, α, Rpre, n, Rpost)
    for Ipost in Rpost
        # Initialize the first value along the filtered dimension
        for Ipre in Rpre
            s[Ipre, 1, Ipost] = x[Ipre, 1, Ipost]
        end
        # Handle all other entries
        for i = 2:n
            for Ipre in Rpre
                s[Ipre, i, Ipost] = α*x[Ipre, i, Ipost] + (1-α)*s[Ipre, i-1, Ipost]
            end
        end
    end
    s
end

TODO: discover whether this is as efficient as broadcast as far as operation fusing etc.

You can avoid any of these for a couple of common cases. EllipsisNotation provides compact intuitive syntactic sugar if you wish to get at the first or last axis.

A[..,1] = [2 1 4 5
           2 2 3 6]

A[..,2] = [3 2 6 5
           3 2 6 6]

A[:,:,1] == [2 1 4 5
             2 2 3 6] #true

For iterating over one index of an arbitrary array, mapslices is convenient, or for extracting a slice along an arbitrary index, selectdim.
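For instance:

```julia
# mapslices applies a function along chosen dimensions;
# selectdim takes a view along an arbitrary one.
A = reshape(collect(1:24), 2, 3, 4)
colsums = mapslices(sum, A; dims=1)   # 1×3×4 array of per-column sums
B = selectdim(A, 3, 2)                # view equal to A[:, :, 2]

size(colsums)    # == (1, 3, 4)
B == A[:, :, 2]  # true
```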

There is also AxisAlgorithms, which does batch matrix multiplications and inversions for a couple of handy cases.

Arrays are column-major

The first index should change fastest. That is, walk over the last index in your outer loop, and apply operations to slices comprising the other indices.

Multidimensional arrays in Julia are stored in column-major order: arrays are stacked one column at a time. (This can be verified using the vec function or the syntax [:].) As the manual’s example shows, copy_cols is much faster than copy_rows because it respects the column-based memory layout of the Matrix and fills it one column at a time; likewise copy_col_row is much faster than copy_row_col because it follows the rule of thumb that the first element to appear in a slice expression should be coupled with the inner-most loop.
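The rule in code, as a sketch:

```julia
# Column-major rule: the first (fastest-varying) index belongs in the
# innermost loop, so memory is walked contiguously.
function copy_cols!(dest::Matrix, src::Matrix)
    for j in axes(src, 2)        # columns in the outer loop
        for i in axes(src, 1)    # rows in the inner loop: contiguous access
            dest[i, j] = src[i, j]
        end
    end
    dest
end

src = rand(4, 4)
dest = similar(src)
copy_cols!(dest, src) == src  # true
```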

Einstein summation

Einstein summation is a handy way of thinking about lots of tensor operations, which is not as esoteric as it sounds, including, e.g., batch matrix multiply. TensorOperations.jl does some optimised Einstein summations. Einsum.jl reputedly does more Einstein summations, but with possibly less optimisation.
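A hedged sketch of TensorOperations’ @tensor macro (assuming that package is installed); this expresses a plain matrix multiply as an Einstein summation, with the repeated index k summed over:

```julia
using TensorOperations

A = rand(3, 4)
B = rand(4, 5)

# := creates C; the repeated index k is contracted (summed) away.
@tensor C[i, j] := A[i, k] * B[k, j]

# C equals A * B up to floating-point rounding.
```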

Structured matrices

Chris says

One of the best scientific computing packages is BandedMatrices.jl. It lets you directly define banded matrices and then overloads things like * and \ to use the right parts of BLAS. Compared to just using a sparse matrix (the standard MATLAB/Python way), this is SCREAMING FAST (the QR factorization difference is pretty big). Banded matrices show up everywhere in scientific computing (pretty much every PDE has a banded matrix operator). If it’s not a banded matrix, then you probably have a block-banded matrix, and thank god that @dlfivefifty also wrote BlockBandedMatrices.jl which will utilize the same speed tricks but expanded to block banded matrices (so, discretizations of systems of PDEs).

Matrix Factorisation

NMF.jl contains reference implementations of non-negative matrix factorisation.

Laplacians.jl by Dan Spielman et al is a matrix factorisation toolkit especially for Laplacian (graph adjacency) matrices.

Plotting

The julia wiki includes worked examples using various engines.

A curious feature of many julia toolkits is that they tend to produce SVG graphics per default, which easily become gigantic for even medium-large data sets. For exploratory data inspection you will probably want to disable this, unless you are careful to make sure your datasets are small. Same goes for interactive JS stuff - it’s only interactive if your browser can render it without crashing. Work that out by rendering to PNG before you try the fancy SVG stuff, which will fill up your repo and hard drive.

Gadfly

The aspirational ggplot clone is Gadfly. It’s elegant, pretty, well integrated into the statistics DataFrame ecosystem, but missing some stuff, and has some weird gotchas.

Switch Gadfly from SVG to PNG or similar rasterised format:

First you will need to install the Fontconfig and Cairo bonus libraries

Pkg.add("Gadfly")
Pkg.add("Cairo")
Pkg.add("Fontconfig")

Now you can force PNG output:

draw(
    PNG(150, 150),
    plot(data,  x=:gamma, y=:p, Geom.point, Geom.line)
)

Another weird characteristic is that Gadfly is slow, especially on initial startup; but even after startup, if I histogram a million data points it takes 30 seconds for each plot of that histogram. It saps the utility of an elegant graph grammar if it isn’t responsive to your elegant adjustments.

Gadfly is based on a clever declarative vector graphics system called Compose.jl, which is independently useful.

Plots.jl

Plots.jl wraps many other plotting packages as backends, although notably not Gadfly. The author explains this as being because Gadfly does not support 3d or interactivity, although since I want neither of those things in general, especially for publication, this is a contraindication for the compatibilities of our mutual priorities. I have had enough grief trying to persuade various mathematics department that this whole “internet” thing is anything other than a PDF download engine; I don’t need to compromise my print output for animation, of all useless fripperies. We tell you, this “multimedia” thing is a fad, and the punters will be going back to quill and vellum soon enough.

I think you can shoehorn it into print-quality CMYK plots.

As an example, if you want fast plots it wraps GR.jl which in turn wraps GR, a cross-platform visualisation framework:

GR is a universal framework for cross-platform visualization applications. It offers developers a compact, portable and consistent graphics library for their programs. Applications range from publication quality 2D graphs to the representation of complex 3D scenes.[…]

GR is essentially based on an implementation of a Graphical Kernel System (GKS) and OpenGL. […] The GR framework is especially suitable for real-time environments.

Anyway, here is one important tip: if you aren’t rendering graphs for publication output, but for say exploratory data analysis, switch to PNG from SVG because SVG is very large for images with lots of details.

using Plots: gr, default
gr()
default(fmt=:png)

If anything is flakey, you might need to check the GR installation instructions which includes such steps as:

apt install libxt6 libxrender1 libxext6 libgl1-mesa-glx libqt5widgets5

For all its smoothness and speed, Plots is not pretty, and it’s not clear how to make it look pretty, since prettiness is hidden down at the toolkit layer.

Plots also targets Javascript online back ends via Plotly, which is neat. I sadly have no use for this at the moment. As I mentioned, in my department, this “online” nonsense is about as popular as expressing your data through modulated flatulence, since the two are approximately equivalent in terms of our professional performance metrics.

Plots has a rich extensions ecosystem. PlotRecipes and StatPlots use a ”Recipes” system to provide macro-based data-specific plotting tools.

For example, StatPlots causes Plots to have sensible DataFrame support.

Table-like data structures, […] are supported thanks to the macro @df which allows passing columns as symbols.

using StatPlots
using DataFrames, IndexedTables
gr(size=(400,300))
df = DataFrame(a = 1:10, b = 10 .* rand(10), c = 10 .* rand(10))
@df df plot(:a, [:b :c], colour = [:red :blue])

There are many other examples.

Vega

Vega.jl binds to a javascript ggplot-alike, vega. There is a “higher level” binding also called VegaLite.jl.

It’s unclear to me how either of these work with print graphics, and they are both lagging behind the latest version of the underlying Vega library, so I’m nervous of them.

Others

Winston.jl has a brutalist simple approach but seems to go.

Statistics and data analysis

Hayden Klok and Yoni Nazarathy are writing a free Julia statistics textbook, which is to my mind a deep introduction to statistics as well as Julia, although a very classical introduction that won’t be fashionable with your Statistical Learning Theory types.

A good starting point is JuliaStats, which organisation produces many statistics megapackages, for kernel density estimates, generalised linear models etc. Install them all using StatsKit:

using StatsKit

gets the whole set.

Data frames are provided by DataFrames.jl, and also DataTables.jl (deprecated, but used by many packages). The two are subtly incompatible in completely boring ways which you can hopefully ignore soon because DataTables is deprecated. For now, use IterableTables.jl to translate where needed between these and many other data sources. One can access DataFrames (And DataTables and SQL databases and streaming data sources) using Query.jl. You can load a lot of the R standard datasets using RDatasets.

using RDatasets
iris = dataset("datasets", "iris")
neuro = dataset("boot", "neuro")

Posterior inference

Another notable product of JuliaStats organisation is Distributions.jl, a probability distribution toolkit providing densities, sampling etc. It is complemented by the nifty Bridge.jl which simulates random processes with univariate indices.
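Basic Distributions.jl usage looks like this (a sketch assuming the package is installed):

```julia
using Distributions

d = Normal(0.0, 2.0)   # a distribution object: mean 0, std dev 2

pdf(d, 0.5)            # density at 0.5
cdf(d, 0.5)            # cumulative distribution function
quantile(d, 0.975)     # inverse cdf

x = rand(d, 1000)      # draw 1000 samples
fit(Normal, x)         # maximum-likelihood fit, returns another Normal
```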

Turing.jl does simulation-based posterior inference, in a compositional model setting.

Turing.jl is a Julia library for (universal) probabilistic programming.

DynamicHMC does Hamiltonian/NUTS sampling in a raw likelihood setting.

Possibly it is a competitor of Klara.jl, the JuliaStats MCMC package.

A straightforward, if not fancy, package for Gaussian processes is Stheno. It aspires to play well with Turing.jl for non-Gaussian likelihoods and Flux.jl for deep Gaussian processes. It currently has automatic differentiation problems.

Miscellaneous model fitting

Lasso and other sparse regressions are available in Lasso.jl, which reimplements the lasso algorithm in pure julia, and GLMNet.jl, which wraps the classic Friedman FORTRAN code for the same. There is also (functionality unattested) an orthogonal matching pursuit one called OMP.jl, but that algorithm is simple enough to bang out yourself in an afternoon, so no stress if it doesn’t work. Incremental/online versions of (presumably exponential family) statistics are in OnlineStats. MixedModels.jl

is a Julia package providing capabilities for fitting and examining linear and generalized linear mixed-effect models. It is similar in scope to the lme4 package for R.

Differentiating, optimisation, differentiable learning

Optimizing

JuMP supports many types of optimisation, including over non-continuous domains, and is part of the JuliaOpt family of confusingly diverse optimizers, which invoke various sub-families of optimizers. The famous NLopt solvers comprise one such class, and they can additionally be invoked separately.

Unlike NLopt and the JuMP family, Optim (part of JuliaNLSolvers) solves optimisation problems purely inside Julia. It has nieces and nephews such as LsqFit for Levenberg-Marquardt non-linear least squares fits. Optim will automatically invoke ForwardDiff (surely that should be switchable to ReverseDiff for large problems?). It assumes mostly unconstrained problems.
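The canonical Rosenbrock example from Optim’s documentation, lightly adapted (assumes the Optim package is installed; the autodiff keyword asks for ForwardDiff gradients rather than finite differences):

```julia
using Optim

# The Rosenbrock banana function, minimised at [1.0, 1.0].
rosenbrock(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2

result = optimize(rosenbrock, zeros(2), BFGS(); autodiff = :forward)

Optim.minimizer(result)  # ≈ [1.0, 1.0]
Optim.minimum(result)    # ≈ 0.0
```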

Autodiff

The juliadiff project produces ForwardDiff.jl and ReverseDiff.jl which do what I would expect, namely autodiff in forward and reverse mode respectively. ForwardDiff claims to be very advanced.

ForwardDiff implements methods to take derivatives, gradients, Jacobians, Hessians, and higher-order derivatives of native Julia functions
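The core calls look like this (a sketch assuming ForwardDiff is installed):

```julia
using ForwardDiff

# For f(x) = ||x||^2 / 2, the gradient is x and the Hessian is the identity.
f(x) = sum(abs2, x) / 2

ForwardDiff.gradient(f, [1.0, 2.0, 3.0])  # == [1.0, 2.0, 3.0]
ForwardDiff.hessian(f, [1.0, 2.0, 3.0])   # == 3×3 identity matrix

# Scalar derivatives too:
g(t) = sin(t) * t
ForwardDiff.derivative(g, 1.0)            # == cos(1.0) + sin(1.0)
```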

In my casual tests it seems to be a bit slow, due to constantly needing to create a new single-argument closure and differentiate it every time. Or maybe I’m doing it wrong? Or maybe most people are not, e.g., finding many different optima in subproblems, but rather solving one big optimisation.

Julia has an embarrassment of different methods of autodiff, and the comparative selling points of each are not always clear.
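For orientation, the core trick behind the forward-mode options is small enough to sketch in a few lines: overload arithmetic on a number that carries its own derivative along. This is a toy illustration of the principle only, not the DualNumbers.jl or ForwardDiff.jl API.

```julia
# Toy forward-mode autodiff via a value-plus-derivative pair.
struct Dual
    val::Float64  # function value
    der::Float64  # derivative carried alongside
end

import Base: +, *, sin
+(a::Dual, b::Dual) = Dual(a.val + b.val, a.der + b.der)
*(a::Dual, b::Dual) = Dual(a.val * b.val, a.der * b.val + a.val * b.der)
sin(a::Dual) = Dual(sin(a.val), cos(a.val) * a.der)  # chain rule

# Differentiate f at x by seeding the derivative slot with 1.
derivative(f, x) = f(Dual(x, 1.0)).der

g(x) = x * x + sin(x)
derivative(g, 2.0)   # equals 2x + cos(x) at x = 2
```

Everything else in the forward-mode packages is, roughly, doing this carefully and generically for the whole arithmetic vocabulary.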

In forward mode (desirable when, e.g., I have few parameters against which to differentiate), when do I use DualNumbers.jl? Probably never; it seems to be deprecated in favour of a similar system in ForwardDiff.jl. OTOH HyperDualNumbers, for example, promises cheap second-order derivatives by generalizing dual numbers to hyperduals. ForwardDiff claims to support Hessians by dual duals, which are supposed to be the same as hyperduals. I am curious which is the faster way of generating Hessians. HyperDualNumbers has a much nicer API, but possibly the ForwardDiff system yields benefits. Look at how HyperDualNumbers handles it, in the homepage example, where we evaluate the derivatives of f at x by evaluating it at Hyper(x, 1.0, 1.0, 0.0).

julia> f(x) = ℯ^x / (sqrt(sin(x)^3 + cos(x)^3))
julia> t0 = Hyper(1.5, 1.0, 1.0, 0.0)
julia> y = f(t0)
4.497780053946162 + 4.053427893898621ϵ1 + 4.053427893898621ϵ2 + 9.463073681596601ϵ1ϵ2

The first term is the function value, the coefficients of both ϵ1 and ϵ2 (which correspond to the second and third arguments of hyper) are equal to the first derivative, and the coefficient of ϵ1ϵ2 is the second derivative.
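To make that concrete, here is a toy hyperdual type with just enough arithmetic to differentiate polynomials; the real HyperDualNumbers.jl handles the full set of primitives. Names and layout here are mine, for illustration.

```julia
# A minimal hyperdual number: value, two epsilon parts, and a cross term.
struct Hyper
    re::Float64; e1::Float64; e2::Float64; e12::Float64
end

import Base: +, *
+(a::Hyper, b::Hyper) = Hyper(a.re + b.re, a.e1 + b.e1, a.e2 + b.e2, a.e12 + b.e12)
*(a::Hyper, b::Hyper) = Hyper(a.re * b.re,
                              a.re * b.e1 + a.e1 * b.re,
                              a.re * b.e2 + a.e2 * b.re,
                              a.re * b.e12 + a.e1 * b.e2 + a.e2 * b.e1 + a.e12 * b.re)

# Seed both epsilon parts with 1, then read off f, f', f''.
x = Hyper(2.0, 1.0, 1.0, 0.0)
y = x * x * x    # f(x) = x^3 at x = 2
# y.re is f = 8, y.e1 == y.e2 is f' = 3x^2 = 12, y.e12 is f'' = 6x = 12
```

The product rule applied to the cross term is what buys the second derivative for free, which is the whole hyperdual trick.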

But! How about Zygote.jl? That’s an alternative AD library from the creators of the aforementioned Flux. It operates in reverse mode and does some zany compilation tricks to get extra speed. It has many fancy features, including compiling to Google Cloud TPUs. Flux itself does something different for now, using its own specialised reverse-mode autodiff, Tracker, but promises to switch transparently to Zygote in the future. In the interim it has many luxurious options, such as defining your own optimised custom derivatives.

Also, you can roll your own using the basic derivative definitions in DiffRules. There is also the very fancy planned Capstan, which aims to use a tape system to inject forward- and reverse-mode differentiation into even very hostile code. However, it also claims to depend on Julia features that don’t work yet, so don’t hold your breath.

See also XGrad which is slightly different again in some fashion, but I will not divulge how because this looks like choice fatigue to me. Julia’s homoiconicity and introspection makes this a fecund area.

Learning

Let’s put those last two together to do differentiable learning!
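As a toy version of what the frameworks below automate, here is gradient-descent fitting of a line with hand-derived gradients. Everything is hand-rolled for illustration; Flux.jl or Zygote.jl would supply the gradients automatically.

```julia
# Fit y ≈ w*x + b by gradient descent on mean squared error,
# with the gradients written out by hand.
function fit_line(xs, ys; η = 0.1, steps = 2000)
    w, b = 0.0, 0.0
    for _ in 1:steps
        err = (w .* xs .+ b) .- ys
        gw = 2 * sum(err .* xs) / length(xs)  # ∂MSE/∂w
        gb = 2 * sum(err) / length(xs)        # ∂MSE/∂b
        w -= η * gw
        b -= η * gb
    end
    w, b
end

xs = collect(0.0:0.1:1.0)
w, b = fit_line(xs, 3.0 .* xs .+ 1.0)  # data from a known line
```

Swapping the two hand-written gradient lines for an autodiff call is precisely the "differentiable learning" step.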

The deep learning toolkits have shorter feature lists than the lengthy ones of the fancy Python/C++ libraries (e.g. mobile app building, cuDNN-backed optimisations), but maybe the native performance is OK without bells and whistles. Also, no-one runs Julia on their phone anyway.

Tensorflow.jl. (Surely one misses the benefit of Julia by using tensorflow, since there are two different array-processing infrastructures to pass between, and a very different approach to JIT versus pre-compiled execution. Or is it a match made in heaven?)

Flux.jl sounds like a reimplementation of TensorFlow-style differentiable programming inside Julia, which strikes me as the way you’d actually do this right, given the end-to-end-optimised design philosophy of Julia.

Flux is a library for machine learning. It comes “batteries-included” with many useful tools built in, but also lets you use the full power of the Julia language where you need it. The whole stack is implemented in clean Julia code (right down to the GPU kernels) and any part can be tweaked to your liking.

It’s missing some features of Tensorflow, but has nice dynamism, like PyTorch, and includes many surprising and/or unique features. Its GPU support is real and elegant, but AFAICS more proof-of-concept than production-ready, depending on the supposition that CuArrays can represent all the operations you need and will perform them optimally, and that you don’t need any fancy DNN-specific GPU optimisations. Its end-to-end Julia philosophy leads to little twists like DiffEqFlux (see below), which makes Neural ODEs simple, despite the famous NeurIPS paper that publicised them finding them to be a giant pain in the arse.

Alternatively, Mocha.jl is a belt-and-braces deep learning thing, with a library of pre-defined layers. Swap out some of the buzzwords of Flux with newer ones, and skip some older ones, and there you are. It doesn’t appear, on brief perusal, to be as flexible as Flux, missing state filters and recurrent nets and many other models that are made less holy by their distance from the platonic ideal of deep learning, the convnet. As such it is not much use to me, despite promising clever things about using CUDA etc. YMMV.

Knet.jl is another deep learning library that claims to show the ease of implementing deep learning frameworks in Julia. Their RNN support looks a little limp, and they are using the less-popular autodiff frameworks, so I won’t be using ’em, but the point is well taken.

If you are going to do that, why not do something left-field, like using the dynamical-systems approach to deep learning popularised by Haber, Ruthotto et al., who have released their models as Meganet.jl? I’m curious to see how they work.

ODEs, PDEs, quadrature

Chris Rackauckas is a veritable wizard with this stuff; just read his blog.

Here is a tour of fun things you can do with stochastic PDEs. DiffEqOperators creates discretizations.

DiffEqFlux works with Flux and claims to make Neural ODEs sort-of simple despite the famous NeurIPS paper that publicised them finding them to be a giant pain in the arse. +1 for julia here.

Graphs

JuliaGraphs supports a number of graph algorithms. It seems to be mostly simple, matrix-ish graph representations.

Laplacians.jl by Dan Spielman et al is an advanced matrix factorisation toolkit for Laplacian (graph adjacency) matrices, i.e. networks viewed as matrices.
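The matrix view of a graph is quick to illustrate in base Julia. This toy builds a Laplacian by hand; Laplacians.jl does the serious factorisation work on top of this representation.

```julia
# A small graph as an adjacency matrix, and its Laplacian L = D - A:
# the "networks viewed as matrices" perspective.
using LinearAlgebra

A = [0 1 1 0;   # a toy 4-node graph
     1 0 1 0;
     1 1 0 1;
     0 0 1 0]
D = Diagonal(vec(sum(A, dims = 2)))  # degree matrix
L = D - A

# The constant vector always lies in the Laplacian's kernel,
# since each row of L sums to zero.
L * ones(4)
```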

A little less vibrant, but maybe still good, is JuNet, which supports a similar set of algorithms but seems, AFAICT, to concentrate on node-edge representations over matrix representations.

Approximating and interpolating

ApproxFun.jl does Chebyshev and Fourier approximations of given functions. This is not, at least primarily, a tool for data analysis, but for solving eigenfunction problems and the like using computational Hilbert space methods, for functions which are otherwise difficult. In particular, we must be able to evaluate the target functions at the desired points. Interpolations.jl does arbitrary-order spline interpolation of mathematical functions, but also of data. This enables some very clever tricks, e.g. random sampling of tricky distributions.
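The random-sampling trick works roughly like this: tabulate a CDF on a grid, then interpolate its inverse at uniform draws. A hand-rolled linear-interpolation sketch follows; Interpolations.jl would supply proper splines for real use.

```julia
# Inverse-transform sampling via a tabulated, interpolated inverse CDF.
xs = range(-5, 5; length = 1001)
pdf_vals = exp.(-abs.(xs))   # unnormalised Laplace density on a grid
cdf = cumsum(pdf_vals)
cdf ./= cdf[end]             # normalise so the CDF ends at 1

# Linearly interpolate the inverse CDF at a uniform draw u ∈ (0, 1).
function sample_inverse_cdf(u)
    i = searchsortedfirst(cdf, u)
    i == 1 && return xs[1]
    t = (u - cdf[i-1]) / (cdf[i] - cdf[i-1])
    xs[i-1] + t * (xs[i] - xs[i-1])
end

sample_inverse_cdf(0.5)  # the median, near 0 for this symmetric density
```

Feeding `rand()` into `sample_inverse_cdf` then yields (approximate) draws from the tabulated density.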

Caching

Doing heavy calculations? Want to save time? Caching notionally helps, but of course is a quagmire of bikeshedding.

The Anamemsis creator(s) have opinions about this:

Julia is a scientific programming language which was designed from the ground up with computational efficiency as one of its goals. As such, even ad hoc and “naive” functions written in Julia tend to be extremely efficient. Furthermore, any memoization implementation is liable to have some performance pain points: in general they contain type unstable code (even if they have type stable output) and they must include a value cache which is accessible from the same scope as the function itself so that global objects are potentially involved. Because of this, memoization comes with a significant overhead, even with Julia’s highly efficient AbstractDict implementations.

My own opinion of caching is that I’m for it. Despite the popularity of the “memoize” pattern I’m against that as a rule; I always end up having to battle the assumptions. Caching logic should, for projects I’ve worked on, be explicit and opt-in, and that should be an acceptable design overhead, because I should only be doing it for a few expensive functions.
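The explicit, opt-in pattern argued for here can be as simple as a Dict plus `get!`. This is a sketch, not any particular package’s API; LRUCache.jl offers a bounded drop-in for the plain Dict.

```julia
# Explicit, opt-in caching: one named cache for one expensive function.
const _sim_cache = Dict{Float64, Float64}()

function expensive_simulation(x::Float64)
    # get! runs the do-block and stores the result only on a cache miss.
    get!(_sim_cache, x) do
        sleep(0.01)   # stand-in for a heavy computation
        x^2           # the "expensive" result
    end
end

expensive_simulation(3.0)  # computed and stored
expensive_simulation(3.0)  # served from the cache
```

Because the cache is a named global, it is also trivial to inspect or empty it, which is half the point of keeping caching explicit.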

LRUCache is the most commonly recommended cache. It provides a simple, explicit, opt-in, in-memory cache structure. I am unclear how this would perform with multiple processes or threads.

Memoize provides an automatic caching macro, but primitive in-memory caching using plain old dictionaries. AFAICT it is possibly compatible with LRUCache, but weirdly I could not find any discussion of that. However, I don’t like memoisation, so that’s the end of that story.

Anamemsis has rather advanced cache decorators and lots of levers to pull to access and play around with caching behaviour.

Caching is especially full-featured with macros, disk persistence and configurable evictions etc. It wants type annotations to be used in the function definitions.

Cleverman Simon Byrne notes that you can roll your own fancy caching using wild julia introspection stunts via Cassette.jl.

Units

People mention that Julia can handle physical units in a syntactically pleasant fashion. It’s not clear how, since all the documentation assumes you already know. From what I can tell, you first need to install Unitful, which includes some useful units and, in particular, Quantity types.

Then you can use units in the following fashion:

julia> using Unitful
julia> const s = u"s"
julia> const minute = u"minute"
julia> 5.3s + 2.0minute
125.3 s

See SampledSignals for some examples of how you do things with these, such as the following method definitions to handle time-to-index conversion.

toindex(buf::SampleBuf, t::Number) = t
toindex(buf::SampleBuf, t::Unitful.Time) = inframes(Int, t, samplerate(buf)) + 1
toindex(buf::SampleBuf, t::Quantity) = throw(Unitful.DimensionError(t, s))

UIs and servers

Escher.jl goes further, rendering HTML UI widgets etc. UPDATE: AFAICT Blink.jl is the bee’s knees, booting up an Electron shell for web-tech GUIs locally, which you then talk to via WebIO.jl. As a bonus, this can also talk to web apps in the form of Mux.jl, which wraps HttpServer.jl (which does the basic HTTP protocol); and there is a web framework, Genie.jl.

Various other options are listed in aviks’ stackoverflow answer.

QML.jl magically plugs into Qt5.