
Julia ML etc

Usefulness: 🔧 🔧 🔧
Novelty: 💡
Uncertainty: 🤪 🤪
Incompleteness: 🚧 🚧

Stats/ML and also DSP in Julia.

Signal processing

DSP.jl has been split off from core and now needs to be installed separately. Also DirectConvolutions has sensible convolution code.
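For instance, a moving-average smoother via DSP.jl looks something like this (a minimal sketch; `conv` and `filt` are the entry points I know of, so check the docs for padding and filter-design options):

using DSP

x = randn(1000)              # a noisy signal
h = ones(10) ./ 10           # a 10-tap moving-average kernel
y = conv(x, h)               # full linear convolution, length 1009
z = filt(h, 1, x)            # the same kernel applied as a causal FIR filter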

FFTs are provided by AbstractFFTs, which in principle wraps many FFT implementations. I don’t know if there is a GPU implementation yet, but there is certainly the classic CPU implementation provided by FFTW.jl, which wraps FFTW internally.
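The AbstractFFTs interface as exposed by FFTW.jl, for reference:

using FFTW

x = randn(256)
X = fft(x)               # complex spectrum
x2 = real(ifft(X))       # round trip, x2 ≈ x
Xr = rfft(x)             # real-input transform, keeps only the 129 non-redundant bins
p = plan_fft(x)          # precompute a plan for repeated transforms of this size/type
X3 = p * x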

As for how to use these things, Numerical tours of data science has a Julia edition with lots of signal-processing content.

JuliaAudio processes audio. They recommend PortAudio.jl as a real-time soundcard interface, which looks sorta simple. See rkat’s example of how this works. There are useful abstractions like SampledSignals to load audio and keep the data and sample rate bundled together. Although, as SampledSignals maintainer Spencer Russell points out, AxisArrays might be the right data structure for sampled signals, so you could use SampledSignals purely for IO and ignore its data structures thereafter.
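A rough sketch of grabbing audio from the default input device, going by the PortAudio.jl README (the exact constructor arguments and defaults may have drifted, so treat this as indicative):

using PortAudio, SampledSignals

stream = PortAudioStream(1, 0)      # 1 input channel, 0 output channels
buf = read(stream, 48000)           # read 48000 frames into a SampleBuf
close(stream)
samplerate(buf), nchannels(buf)     # the sample rate travels with the data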

Images.jl processes images.

QMC

Low discrepancy and other QMC stuff. Mostly I want low discrepancy sequences. There are two options with near identical interfaces; I’m not sure of the differences.

Sobol.jl claims to have been performance profiled:

] add Sobol
using Sobol
s = SobolSeq(2)
# Then
x = next!(s)

QMC.jl:

] add https://github.com/PieterjanRobbe/QMC.jl
using QMC
lat = LatSeq(2)
#then
next(lat)

Matrix Factorisation

NMF.jl contains reference implementations of non-negative matrix factorisation.
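A hedged sketch of the NMF.jl interface as I understand it (`nnmf` is the entry point; the algorithm and iteration keywords may differ between versions):

using NMF

X = rand(20, 30)                           # non-negative data matrix
r = nnmf(X, 4; alg=:multmse, maxiter=200)  # rank-4 factorisation, multiplicative updates
W, H = r.W, r.H                            # X ≈ W * H, both factors non-negative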

Laplacians.jl by Dan Spielman et al is a matrix factorisation toolkit especially for Laplacian (graph adjacency) matrices.

Differentiating, optimisation

Optimizing

JuMP supports many types of optimisation, including over non-continuous domains, and is part of the JuliaOpt family of confusingly diverse optimizers, which invoke various sub-families of optimizers. The famous NLopt solvers comprise one such class, and they can additionally be invoked separately.
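A small constrained quadratic program in JuMP, with Ipopt as the backend solver (the `Model` constructor syntax has shifted between JuMP releases, so treat this as a sketch):

using JuMP, Ipopt

model = Model(Ipopt.Optimizer)
@variable(model, x >= 0)
@variable(model, y >= 0)
@constraint(model, x + y <= 2)
@objective(model, Min, (x - 1)^2 + (y - 2)^2)
optimize!(model)
value(x), value(y), objective_value(model)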

Unlike NLopt and the JuMP family, Optim.jl (part of JuliaNLSolvers, a different family entirely) solves optimisation problems purely inside Julia. It has nieces and nephews such as LsqFit for Levenberg-Marquardt non-linear least-squares fits. Optim will automatically invoke ForwardDiff for gradients, and mostly assumes unconstrained problems.
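Optim in action on the obligatory Rosenbrock function, with ForwardDiff supplying gradients:

using Optim

rosenbrock(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
result = optimize(rosenbrock, zeros(2), BFGS(); autodiff = :forward)
Optim.minimizer(result)    # ≈ [1.0, 1.0]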

Krylov.jl is a collection of Krylov-type iterative methods for large linear systems and least-squares problems.
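The interface is, as far as I can tell, of the `x, stats = solver(A, b)` form; a sketch with conjugate gradients:

using Krylov, LinearAlgebra, SparseArrays

A = sprandn(1000, 1000, 0.01)
A = A' * A + I               # make it symmetric positive definite
b = randn(1000)
x, stats = cg(A, b)          # conjugate gradient iteration
stats.solved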

Autodiff

Julia is a hotbed of autodiff for technical and community reasons. Such a hotbed that it’s worth discussing in the autodiff notebook.

Closely related, projects like ModelingToolkit.jl blur the lines between equations and coding, and allow easy definition of differentiable or probabilistic programming.

Statistics, probability and data analysis

Hayden Klok and Yoni Nazarathy are writing a free Julia Statistics textbook which seems a thorough introduction to statistics as well as Julia, albeit statistics in a classical frame that won’t be fashionable with either your learning theory or Bayesian types.

A good starting point for doing stuff is JuliaStats, an organisation that produces many statistics megapackages, for kernel density estimation, generalised linear models etc. Install them all using StatsKit:

using StatsKit

Data frames

The workhorse data structure of statistics.

Data frames are provided by DataFrames.jl.

There are some older ones you might encounter, such as DataTables.jl, which are subtly incompatible in tedious ways that we can hopefully ignore soon. For now, use IterableTables.jl to translate where needed between these and many other data sources.

One can access DataFrames (and DataTables and SQL databases and streaming data sources) using Query.jl. DataFramesMeta has also been recommended. You can load a lot of the R standard datasets using RDatasets.

using RDatasets
iris = dataset("datasets", "iris")
neuro = dataset("boot", "neuro")

Frequentist statistics

Lasso and other sparse regressions are available in Lasso.jl, which reimplements the lasso algorithm in pure Julia, and GLMNet.jl, which wraps the classic Friedman FORTRAN code for the same (a quick sketch follows below). There is also (functionality unattested) an orthogonal matching pursuit one called OMP.jl, but that algorithm is simple enough to bang out oneself in an afternoon, so no stress if it doesn’t work. Incremental/online versions of (presumably exponential family) statistics are in OnlineStats. MixedModels.jl

is a Julia package providing capabilities for fitting and examining linear and generalized linear mixed-effect models. It is similar in scope to the lme4 package for R.
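Back to the sparse regressions: a quick Lasso.jl sketch, fitting a whole regularisation path (the `fit(LassoPath, X, y)` call mirrors the GLM.jl `fit` interface; the path-selection helpers are worth reading up on):

using Lasso

X = randn(100, 10)
β = [3.0; -2.0; zeros(8)]          # sparse ground truth
y = X * β .+ 0.1 .* randn(100)
path = fit(LassoPath, X, y)        # fits a sequence of λ values
coef(path)                         # coefficients along the path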

Probabilistic programming

Probabilistic programming! Bayesian inference, considered broadly!

Another notable product of JuliaStats organisation is Distributions.jl, a probability distribution toolkit providing densities, sampling etc. It is complemented by the nifty Bridge.jl which simulates random processes with univariate indices.
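Distributions.jl in one breath:

using Distributions

d = Gamma(2.0, 3.0)
rand(d, 5)                       # sampling
pdf(d, 1.0), logpdf(d, 1.0)      # densities
mean(d), var(d), quantile(d, 0.95)
fit_mle(Gamma, rand(d, 10_000))  # maximum-likelihood fit roughly recovers the parameters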

Turing.jl does simulation-based posterior inference, in a compositional model setting.

Turing.jl is a Julia library for (universal) probabilistic programming.
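A token Turing model, the Beta-Bernoulli coin flip (the `@model function … end` syntax is the current one; older releases used `@model f(y) = begin … end`, so adjust to taste):

using Turing

@model function coinflip(y)
    p ~ Beta(1, 1)
    for i in eachindex(y)
        y[i] ~ Bernoulli(p)
    end
end

chain = sample(coinflip([1, 1, 0, 1, 1]), NUTS(), 1000)
mean(chain[:p])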

More recently, and with a bigger splash, Gen:

Gen simplifies the use of probabilistic modeling and inference, by providing modeling languages in which users express models, and high-level programming constructs that automate aspects of inference.

Like some probabilistic programming research languages, Gen includes universal modeling languages that can represent any model, including models with stochastic structure, discrete and continuous random variables, and simulators. However, Gen is distinguished by the flexibility that it affords to users for customizing their inference algorithm.

Gen’s flexible modeling and inference programming capabilities unify symbolic, neural, probabilistic, and simulation-based approaches to modeling and inference, including causal modeling, symbolic programming, deep learning, hierarchical Bayesian modeling, graphics and physics engines, and planning and reinforcement learning.

It has a very impressive talk demonstrating how you would clean data using it.
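The Gen idiom, roughly: generative functions declared with `@gen`, addressed random choices via `@trace`, and inference run against a choice map of observations. A hedged sketch of Bayesian linear regression with importance resampling:

using Gen

@gen function line_model(xs)
    slope = @trace(normal(0, 2), :slope)
    intercept = @trace(normal(0, 2), :intercept)
    for (i, x) in enumerate(xs)
        @trace(normal(slope * x + intercept, 0.5), (:y, i))
    end
end

xs = collect(1.0:5.0)
observations = choicemap()
for (i, y) in enumerate(2.0 .* xs .+ 1.0)
    observations[(:y, i)] = y          # condition on noisy line data
end

(trace, _) = importance_resampling(line_model, (xs,), observations, 1000)
trace[:slope], trace[:intercept]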

DynamicHMC does Hamiltonian/NUTS sampling in a raw likelihood setting.

Possibly it is a competitor of Klara.jl, the JuliaStats MCMC package.

Machine learning

Let’s put the automatic differentiation, the optimizers, and the samplers together to do differentiable learning!

The deep learning toolkits have shorter feature lists than the lengthy ones of those fancy Python/C++ libraries (e.g. mobile app building and cuDNN-backed optimisations are less present in Julia libraries). But maybe the elegance/performance of Julia makes some of those features irrelevant? I for one don’t care about most of them, because I’m a researcher, not a deployer.

Having said that, TensorFlow.jl gets all the features, because it invokes the C++ TensorFlow. Surely one misses the benefits of Julia this way, since there are two different array-processing infrastructures to pass between, and a very different approach to JIT versus pre-compiled execution. Or no?

Flux.jl sounds like a reimplementation of TensorFlow-style differentiable programming inside Julia, which strikes me as the right way to do this, to benefit from the end-to-end-optimised design philosophy of Julia.

Flux is a library for machine learning. It comes “batteries-included” with many useful tools built in, but also lets you use the full power of the Julia language where you need it. The whole stack is implemented in clean Julia code (right down to the GPU kernels) and any part can be tweaked to your liking.

It’s missing some features of TensorFlow, but has simple dynamism like PyTorch, and includes some surprising/unique feature combinations in compensation. Its GPU support is real and elegant, but AFAICS more proof-of-concept than production-ready, relying upon the supposition that CuArrays can represent all the operations I need and will perform them optimally, and that I don’t need any fancy DNN-specific GPU optimizations. This is dubious – for example, it does not have all the FFT operations I want, such as the discrete cosine transform. Its end-to-end Julia philosophy is justified, IMO, by twists like DiffEqFlux – see below – which makes neural ODEs sort-of simple to create.
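The classic Flux training loop, for flavour (the API churns between versions – `Dense(10, 32, relu)`, `ADAM`, and `Flux.train!` are the older spellings – so treat this as a sketch):

using Flux

model = Chain(Dense(10, 32, relu), Dense(32, 2), softmax)
loss(x, y) = Flux.crossentropy(model(x), y)

X = rand(Float32, 10, 100)                  # 100 ten-dimensional samples
Y = Flux.onehotbatch(rand(1:2, 100), 1:2)   # random labels, one-hot encoded
data = [(X, Y)]                             # a single big batch

opt = ADAM(1e-3)
Flux.train!(loss, Flux.params(model), data, opt)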

Alternatively, Mocha.jl is a belt-and-braces deep learning thing, with a library of pre-defined layers. Swap out some of the buzzwords of Flux with newer ones, and skip some older ones, and there you are. It doesn’t appear, on brief perusal, to be as flexible as Flux, missing state filters and recurrent nets and many other models that are made less pure by their distance from the platonic ideal of deep learning, the convnet. As such it is not much use to me, despite promising clever things about using CUDA etc. YMMV.

Knet.jl is another deep learning library that claims to show the ease of implementing deep learning frameworks in Julia. Their RNN support looks a little limp, and they are using the less-popular autodiff frameworks, so I won’t be using ’em, but the point is well taken.

If one were aiming to do that, why not do something left-field like use the dynamical systems approach to deep learning? This neat trick was popularised by Haber and Ruthotto et al, who have released some of their models as Meganet.jl. I’m curious to see how they work.

A straightforward, if not fancy, package for Gaussian processes is Stheno. It aspires to play well with Turing.jl for non-Gaussian likelihoods and Flux.jl for deep Gaussian processes. It currently has automatic differentiation problems.

MLJ is a scikit-learn-like pipeline for data analysis in Julia which standardises model composition and automates some of the training etc. It has various adaptors for other ML systems via MLJModels.
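The MLJ idiom, hedged because the model-loading macro has evolved: wrap a model and data in a `machine`, then `fit!` and `predict` (this assumes DecisionTree.jl is installed as the backing model):

using MLJ

X = (x1 = rand(200), x2 = rand(200))                   # any Tables.jl-compatible table
y = categorical(rand(["yes", "no"], 200))

Tree = @load DecisionTreeClassifier pkg=DecisionTree   # adaptor supplied via MLJModels
mach = machine(Tree(), X, y)
fit!(mach)
yhat = predict(mach, X)                                # probabilistic predictions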

ODEs, PDEs, SDEs

Chris Rackauckas is a veritable wizard with this stuff; just read his blog.

Here is a tour of fun tricks with stochastic PDEs. There is a lot of tooling for this; DiffEqOperators … does something. DiffEqFlux (EZ neural ODEs) works with Flux and claims to make neural ODEs simple. The implementation of these things in Python, for the award-winning NeurIPS paper that made them famous, was a nightmare. +1 for Julia here. The neural SDE section is mostly Julia; go check that out.
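For orientation, the basic DifferentialEquations.jl workflow that all of this (DiffEqFlux included) builds on: define a problem, hand it to `solve`, and get back a dense-output solution object:

using DifferentialEquations

f(u, p, t) = -0.5 * u                  # exponential decay, the docs’ hello-world
prob = ODEProblem(f, 1.0, (0.0, 10.0))
sol = solve(prob, Tsit5())
sol(2.5)                               # interpolated solution at t = 2.5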