“[…]Now. Where is it?”
“Where is what?”
“The time machine.”
“You’re standing in it,” said [X].
“How… does it work?” [Y] said, trying to make it sound like a casual enquiry.
“Well, it’s really terribly simple,” said [X], “it works any way you want it to. You see, the computer that runs it is a rather advanced one. In fact it is more powerful than the sum total of all the computers on this planet including – and this is the tricky part – including itself. Never really understood that bit myself, to be honest with you. But over ninety-five per cent of that power is used in simply understanding what it is you want it to do. I simply plonk my abacus down there and it understands the way I use it. I think I must have been brought up to use an abacus when I was a… well, a child, I suppose.
“[R], for instance, would probably want to use his own personal computer. If you put it down there, where the abacus is, the machine’s computer would simply take charge of it and offer you lots of nice user-friendly time-travel applications complete with pull-down menus and desk accessories if you like. Except that you point to 1066 on the screen and you’ve got the Battle of Hastings going on outside your door, er, if that’s the sort of thing you’re interested in.”
[X]’s tone of voice suggested that his own interests lay in other areas.
“It’s, er, really quite fun in its way,” he concluded. “Certainly better than television and a great deal easier to use than a video recorder. If I miss a programme I just pop back in time and watch it. I’m hopeless fiddling with all those buttons.” […]
“You have a time machine and you use it for… watching television?”
“Well, I wouldn’t use it at all if I could get the hang of the video recorder.”
On the dark art of persuading the computer to respond intuitively to your intentions, with particular regard to music.
The input-data side of gesture recognition.
In non-musical circles they call this “physical computing”, or “natural user interfaces”, or “tangible computing”, depending upon whom they are pitching to for funding this month.
- I just designed an interesting digital instrument with a bunch of free control parameters.
- I have an interface with a different (usually much smaller) number of control parameters.
- There is no obvious “best”, or even immediately intuitive, mapping from one to the other.
How do I plug these into each other in an intelligible, expressive way so as to perform using them?
This question is broad, vague, and comes up all the time.
Ideas I would like to explore:
- Interpolating between interesting parameters using arbitrary regression. Rebecca Fiebrink’s Wekinator does this using simple neural networks.
- Constructing basis vectors in some clever way, e.g. sparse basis dictionaries.
- Constructing quasi-physical models that explore parameter space in some smart, intuitive way, e.g. swarm systems, Hamiltonian models.
- Doing basic filtering of generic UI signals (see the sketch after this list):
  - leaky integration
  - differentiation (smoothed)
  - thresholding and Schmitt-triggering
- Constructing random IIR convolutional filters and harnessing them for control. How do you select the best ones, though? What is the right objective function?
- Sparse correlation.
- Physical models as input.
- Random sparse physical models as input.
- Annealing/Gibbs-distribution-style processes.
- Der/Zahedi/Bertschinger/Ay-style information sensorimotor loops.
And related stuff.
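As a concrete illustration of the basic-filtering idea in the list above, here is a minimal Python sketch of three of those primitives — a leaky integrator, a smoothed differentiator, and a Schmitt trigger — applied to a generic unit-range UI signal. The class names, constants and toy fader signal are all hypothetical; this is just to show the shape of the thing.

```python
import numpy as np

class LeakyIntegrator:
    """y[n] = decay * y[n-1] + (1 - decay) * x[n]: smooths a jittery control signal."""
    def __init__(self, decay=0.95):
        self.decay, self.y = decay, 0.0
    def __call__(self, x):
        self.y = self.decay * self.y + (1.0 - self.decay) * x
        return self.y

class SmoothedDifferentiator:
    """Difference of the smoothed signal: a crude velocity estimate."""
    def __init__(self, decay=0.8):
        self.smooth, self.prev = LeakyIntegrator(decay), 0.0
    def __call__(self, x):
        s = self.smooth(x)
        d, self.prev = s - self.prev, s
        return d

class SchmittTrigger:
    """Hysteresis thresholding: switches on above `hi`, releases only below `lo`."""
    def __init__(self, lo=0.3, hi=0.7):
        self.lo, self.hi, self.state = lo, hi, False
    def __call__(self, x):
        if self.state and x < self.lo:
            self.state = False
        elif not self.state and x > self.hi:
            self.state = True
        return self.state

# Toy usage: a noisy fader sweep becomes a smoothed value, a velocity, and a gate.
fader = np.clip(np.linspace(0, 1, 200) + 0.05 * np.random.randn(200), 0, 1)
smooth, vel, gate = LeakyIntegrator(), SmoothedDifferentiator(), SchmittTrigger()
outputs = [(smooth(x), vel(x), gate(x)) for x in fader]
```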
Copulas are an intuitive way to relate 2 or more (monotonically varying?) values by their quantiles.
The most basic one is Gaussian, where the parameter of the copula is essentially the correlation. For various reasons, I’m not keen on this in practice; I do not have time to go into my intuitions as to why it is so, but Gaussian tails “feel” wrong for control. Student-t, perhaps?
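A toy sketch of the Gaussian-copula idea, assuming NumPy and SciPy: two control streams are coupled purely through their quantiles, with the correlation-like parameter rho; the control ranges at the end are made up. Swapping the Gaussian for a Student-t (multivariate t samples pushed through the t CDF with matching degrees of freedom) would give the heavier-tailed coupling hinted at above.

```python
import numpy as np
from scipy import stats

rho = 0.8                                   # copula parameter, roughly the correlation
cov = [[1.0, rho], [rho, 1.0]]

# Draw correlated Gaussians, then push each margin through its CDF:
# the columns of u are uniform on [0, 1] but coupled by their quantiles.
z = np.random.multivariate_normal([0.0, 0.0], cov, size=1000)
u = stats.norm.cdf(z)

# Map the coupled quantiles onto whatever control ranges you actually need.
cutoff = 200.0 + u[:, 0] * 5000.0           # e.g. a filter cutoff in Hz
grain_size = 0.01 + u[:, 1] * 0.5           # e.g. a grain length in seconds
```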
UI design ideas
- circular sequencer
- gesture classifiers
- accelerometer harvest for smartphone
libmapper bundles together UI signals and provides a discovery protocol; the core library is in C right now, with Python bindings and some Pure Data/Max MSP implementations.
But if you were doing that, why not use some IoT tools and benefit from greater brainshare?
MusicBricks includes gestural controllers and syncing among its various open-source umbrella projects.
The Autobahn project:
provides open-source implementations of the WebSocket Protocol and the Web Application Messaging Protocol (WAMP) network protocols.
WebSocket allows bidirectional real-time messaging on the Web and WAMP adds asynchronous Remote Procedure Calls and Publish & Subscribe on top of WebSocket.
WAMP is ideal for distributed, multi-client and server applications, such as multi-user database-driven business applications, sensor networks (IoT), instant messaging or MMOGs (massively multi-player online games).
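For flavour, a minimal publish/subscribe sketch using Autobahn|Python’s asyncio Component API. It assumes a WAMP router (e.g. Crossbar.io) listening at ws://localhost:8080/ws with a realm called realm1; the topic name is made up.

```python
from autobahn.asyncio.component import Component, run

comp = Component(
    transports="ws://localhost:8080/ws",   # assumed WAMP router endpoint
    realm="realm1",                        # assumed realm name
)

@comp.on_join
async def joined(session, details):
    # Subscribe to a (hypothetical) controller topic...
    def on_fader(value):
        print("fader1 =", value)
    await session.subscribe(on_fader, "com.example.fader1")

    # ...and publish a value for any other subscribers.
    session.publish("com.example.fader1", 0.42)

if __name__ == "__main__":
    run([comp])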
Luis Lloret’s OSMID aims to provide a lightweight, portable, easy to use tool to convert MIDI to OSC and OSC to MIDI.
Mark Francombe’s browser MIDI/OSC converter, MIDI MESSAGE GENERATOR.
There are various handy GUI frameworks designed for musical control.
supercollider.js does this and much more.
OSC-JS exists, bridging websockets to OSC, but doesn’t look as maintained as osc.js. Are there others?
legato is a small node.js library written in coffeescript, but that doesn’t really matter. legato is designed to let you create beautifully simple connections between devices and software, regardless of environment.
Reasonably comprehensive support for MIDI with decent timing in Löve2d.
- mmExtensions by Martin Marier has the best-designed preset interpolation system I have seen, all so that its creator may plug a networked bath sponge into clarinet recordings.
iPad, Android, Windows tablet…
For Windows tablet, xotopad.
For iOS/Android, TouchOSC, Lemur…
For anything +Ableton, yeco.
The classic depth camera is the Kinect. A more open depth camera: Orbbec3d.
Calibration is tricky; Rulr attempts to solve this in an open-source, general way (Rulr docs). OpenKinect handles the Kinect.
Myo is a wristband sensor that measures your muscles directly using EMG. Similar: the XTH, using MMG – "which captures motion, direction and orientation sensors (integrated in a 9-DoF IMU) and muscle sound (also known as mechanomyogram or MMG)"
Infrared hand tracker. In my experience, not really stable enough for on-stage use (it needs better Kalman filtering), but gee, it’s small and portable.
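Since the complaint above amounts to “needs better Kalman filtering”, here is a minimal constant-velocity Kalman filter for one noisy position coordinate, in Python with NumPy. The frame rate and noise covariances are guesses you would tune to your particular tracker.

```python
import numpy as np

class Kalman1D:
    """Constant-velocity Kalman filter for one noisy position coordinate."""
    def __init__(self, dt=1.0 / 60.0, process_var=1e-4, measurement_var=1e-2):
        self.F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (position, velocity)
        self.H = np.array([[1.0, 0.0]])              # we only observe position
        self.Q = np.eye(2) * process_var             # process noise (tune to taste)
        self.R = np.array([[measurement_var]])       # measurement noise (tune to taste)
        self.x = np.zeros((2, 1))                    # state estimate
        self.P = np.eye(2)                           # estimate covariance

    def step(self, z):
        """Feed one noisy position measurement, get a smoothed position back."""
        # predict
        x_pred = self.F @ self.x
        P_pred = self.F @ self.P @ self.F.T + self.Q
        # update
        y = np.array([[z]]) - self.H @ x_pred
        S = self.H @ P_pred @ self.H.T + self.R
        K = P_pred @ self.H.T @ np.linalg.inv(S)
        self.x = x_pred + K @ y
        self.P = (np.eye(2) - K @ self.H) @ P_pred
        return float(self.x[0, 0])
```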
Keith McMillen’s fancy controllers
e.g. QuNexus, multi-dimensional MIDI controllers. (Ongoing project – find out how to work them in Bitwig.)
Midi Fighter Twister
16 knobs in a grid; what is not to love? The refurbs are affordable.
[Turns] everyday objects into touchpads and combine them with the internet. It’s a simple Invention Kit for Beginners and Experts doing art, engineering, and everything in between
Hmm. I’m not sold on this, as it’s a rather expensive way of getting 1-dimensional controllers out of $2 contact mics, and you could do a lot more with this if you were clever. Nice if you are short of time and quirk, but not short of money.
Misc ghetto options
The Wiimote should be a normal HID device, but it has nasty sharp edges, so you avoid them using alternative libraries:
wiiuse is a library written in C that connects with several Nintendo Wii remotes. It supports motion sensing, IR tracking, the Nunchuk, the Classic Controller, the Balance Board, and the Guitar Hero 3 controller. Being single-threaded and non-blocking makes for a lightweight and clean API.
OS X mapper DarwiinRemote.
macOS driver wjoy
OSCulator is a commercial product which does this; it’s pretty good.
libcwiid seems to be Linux-happy? But it’s a naked C library and apparently threading-tricky. It has a Python interface.
Portable wireless routers
Human Instruments does good accessible interface work.
More-or-less working since the 1980s; still the best idea, if you can live with 7-bit scalars as your lingua franca. See MIDI.
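To make the “7-bit scalars” point concrete: a MIDI control change carries a value from 0 to 127, and that is all the resolution you get per message. A quick sketch using the mido package (it needs a MIDI backend such as python-rtmidi installed; the port is whatever your OS exposes, and CC number 1 – the mod wheel – is just an example):

```python
import mido

out = mido.open_output()                 # first available MIDI output port
value = round(0.73 * 127)                # squeeze a unit-range control into 0..127
out.send(mido.Message('control_change', channel=0, control=1, value=value))
```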
A short-lived project from the 1990s to produce a more flexible protocol than MIDI. It insinuated its way into many projects before death, and still haunts them. Because it is more flexible than MIDI, it is sometimes discussed as if it were the apotheosis of protocols, as opposed to an incremental improvement on MIDI with many debilities of its own, and much narrower support.
1. Stateless protocol designed to support UDP – and therefore a one-way protocol. No question-and-response here, so you always end up re-inventing TCP if you want to do two-way communication. Which presumably you do, or you’d be using MIDI.
2. Because of the UDP assumption, it is bad for transferring large data, e.g. audio.
3. Supports strings, but only ASCII, so you can’t even work around (2) by transmitting filenames, which are not reliably ASCII on modern computers.
4. Supports lots of data types, including nested data trees and timed execution, but only a partial subset is implemented by most software apart from, AFAICT, SuperCollider.
5. Has a rather unpleasant addressing scheme, where messages can be addressed to paths including wildcards. But the sender may use wildcards and the receiver may not, which is backwards for all the practical use-cases that I, at least, encounter.
6. Doesn’t guarantee delivery, due to the UDP assumption. When you mention this, SuperCollider fans tell you that you CAN in fact use TCP instead. Which you can, but only if you are using SuperCollider, which rather diminishes the “universal ultimate controller protocol for everything” argument.
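For reference, the fire-and-forget UDP flavour of OSC, sketched with the python-osc package; the address pattern and the port (57120, SuperCollider’s default) are just examples:

```python
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 57120)    # e.g. a SuperCollider server
client.send_message("/filter/cutoff", 1200.0)   # one-way: no reply, no delivery guarantee
```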
What your keyboard talks to your computer. 🚧
Sundry Bluetooth protocols