Beads is a software library written in Java for realtime audio. It was started by Ollie Bown in 2008. It is an open source project and has been developed with support from Monash University in Melbourne, via the Centre for Electronic Media Art’s ARC Discovery Grant Project “Creative Ecosystems”, and a Small Grant for Early Career Researchers from the Faculty of Information Technology. Beads contributors include Ollie Bown, Ben Porter and Benito.
For more info and requests, contact Ollie, who resides at the domain icarus.nu, or check out the discussion group.
If you are doing academic work and would like to cite the use of Beads in your work, there is no paper explicitly on Beads, but you could reference this one, which is one of the first and primary uses of Beads.
Using Beads with Eclipse
Beads is just a Java library, so to use Beads you need to know Java. A good way to get started is with the IDE Eclipse:
- Download and install Eclipse. Eclipse is a free Integrated Development Environment (IDE) for Java (or use commandline tools or whatever IDE takes your fancy).
- Download the Beads Library.
- Follow the instructions in the README.txt file inside the project folder.
Using Beads with Processing
Alternatively, there’s an even quicker way into Java, via Processing:
- Download and install Processing. Processing is a kind of simplified version of Java for graphics and multimedia.
- Download the Beads Processing Library.
- Follow the instructions in the README.txt file inside the tutorial.
NEW: Be sure to check out the Beads book, “Sonifying Processing”, by Evan Merz here.
Latest version dated 20140318:
- Beads Library (contains Library, JavaDocs and Tutorial, loads as an Eclipse project).
- Beads Library for Processing (contains Library, JavaDocs and Tutorial).
Older versions are listed by date here.
Source code on Github: https://github.com/orsjb/beads. The source is configured as an Eclipse project.
Browse the latest JavaDocs here.
Read a downloadable tutorial for Beads (by Evan Merz) here.
Who, What, Where, Why?
Beginning with what and why, then where and who.
What: Beads is a library for programming audio in Java, aimed at musical and other creative sound applications.
Why? It would be hard to claim that nothing else out there does the same thing, but nothing else does exactly the same set of things in the same way. Beads is pure Java, meaning it’s easy to work at every level of an application from the same set of sources; it uses a simple set of framework classes to make quick development of musical applications as easy as possible; and it’s got some cool features. And above all, why not?
“Where” is easy, because Java is pretty ubiquitous, and because Beads is open source. For development purposes, “where” can mean in Eclipse, NetBeans, another IDE, the commandline, or embedded in Processing, MaxMSP or other media environments, and on the web. Beads has a flexible exchangeable audio IO layer so porting it to places besides ordinary desktop Java is fairly straightforward. A longer-term wish is to rid Beads of other JavaSound dependencies and give it solid, self-sufficient audio file IO capabilities.
Who? Anyone who wants to make computer music or audio applications. You will need to be able to program, but you can learn on the job following the Eclipse examples easily on any platform. The following topics assume some Java knowledge, so if you’re getting started, expect to come back to these topics as you go.
The Audio Context
Beads uses the class AudioContext as the first port of call for all audio programming. The AudioContext helps keep audio processing in order, taking care of IO, the audio format of the processing (e.g., sample rate) and the size of buffers that Beads uses internally to calculate audio. Most UGens need an AudioContext at instantiation. In simple circumstances it will suffice to create the default AudioContext. Ultimately, AudioContext will be abstracted and subclasses will be provided to work using a number of different IO solutions (e.g., JavaSound, JJack, RTAudio, JavaME audio, VST). AudioContext has a public field ‘out’ which you plug your output sound into. Once you’re ready to go you can start your AudioContext or set it to run in non-realtime mode.
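As a minimal sketch of this setup (assuming the standard Beads package layout, net.beadsproject.beads.*, and the constructors shown here), a complete program that plays a quiet sine tone might look like this:

```java
import net.beadsproject.beads.core.AudioContext;
import net.beadsproject.beads.data.Buffer;
import net.beadsproject.beads.ugens.Gain;
import net.beadsproject.beads.ugens.WavePlayer;

public class HelloBeads {
    public static void main(String[] args) {
        // Create the default AudioContext (default IO, sample rate and buffer size).
        AudioContext ac = new AudioContext();
        // A 440Hz sine wave, attenuated by a Gain, plugged into the output.
        WavePlayer wp = new WavePlayer(ac, 440.0f, Buffer.SINE);
        Gain g = new Gain(ac, 1, 0.1f);
        g.addInput(wp);
        ac.out.addInput(g);
        // Start realtime audio processing.
        ac.start();
    }
}
```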
A UGen (unit generator — terminology borrowed from SuperCollider) is an audio processing unit with zero or more audio input and output channels. UGens can be instantiated and connected to other UGens to create audio chains. UGens can also use other UGens as audio rate controllers for specific parameters. For example, the Gain UGen can have the gain level controlled from a WavePlayer UGen for amplitude modulation, and a WavePlayer UGen can have its frequency controlled by another WavePlayer UGen for frequency modulation. Certain UGens, such as Envelope, Glide and Static, are particularly useful for controlling these audio rate parameters. Finally, a UGen can encapsulate a more complex configuration of other UGens (see audio chains below), by allocating input and output proxies.
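For instance, the amplitude modulation case can be sketched by plugging one WavePlayer into the gain parameter of a Gain (a fragment, assuming an AudioContext named ‘ac’ is already in scope):

```java
// Amplitude modulation: a low-frequency WavePlayer drives the Gain level at audio rate.
WavePlayer carrier = new WavePlayer(ac, 440.0f, Buffer.SINE);
WavePlayer lfo = new WavePlayer(ac, 4.0f, Buffer.SINE);
Gain g = new Gain(ac, 1, lfo);  // the UGen argument becomes the gain controller
g.addInput(carrier);
ac.out.addInput(g);
```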
Building Audio Chains
UGens don’t do anything until they’re part of an audio chain. An audio chain is simply a set of UGens connected together in some way. To connect a UGen to another UGen, use one of the addInput() methods. You can either connect all of the outputs of a to all of the inputs of b, e.g., b.addInput(a), or connect a specific output to a specific input, e.g., b.addInput(0, a, 3), which connects output 3 of a to input 0 of b. UGens are only active (processing audio) when they are connected, directly or indirectly, to the input of another active UGen and are not themselves paused.
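The two connection styles look like this (a fragment; a and b stand for any two compatible UGens with enough channels):

```java
b.addInput(a);        // all outputs of a feed all inputs of b
b.addInput(0, a, 3);  // output channel 3 of a feeds input channel 0 of b
```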
Under the hood, the audio processing scheduler passes its way up the chain of connected UGens beginning with the audio output (the object called ‘out’ in your AudioContext). The scheduler ensures that all active UGens are updated exactly once each time step, with UGens further up the audio chain (upstream) being updated before UGens lower down (downstream). For example, if you connected a WavePlayer object to a Gain object, which you connected to the audio output, then at each time-step the output would cause the Gain to be updated, and the Gain would ask the WavePlayer to update first, before doing its own update (obviously in order to update itself the Gain object needs to know what the output of the WavePlayer object is going to be). Only once the WavePlayer was updated would the Gain be updated. Pausing a UGen (see below) deactivates it, meaning that it will also not forward update requests to UGens further up the audio chain. Killing the Gain object would cause it to be removed from the audio chain, and implicitly the WavePlayer would be removed along with it (see below).
There are some UGens that don’t actually have any outputs, but still need to be connected to an audio chain. For example, Beads has an audio rate Clock which sends messages to listeners at each tick, but doesn’t actually output anything. In such cases, you can add the outputless UGen to the audio chain using the method addDependent(). For example, if you wanted your Clock to always be running, you could call out.addDependent(clock), where ‘out’ is the output provided by your AudioContext. However, your Clock might be temporary and associated with a short segment of music, in which case you could add the Clock somewhere higher up the call chain in such a way that it gets automatically jettisoned along with the rest of that part of the audio chain.
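A permanently running Clock can be sketched like this (a fragment, assuming an AudioContext named ‘ac’ and the Clock constructor taking an interval in milliseconds):

```java
// An outputless UGen: the Clock ticks every 500ms but produces no audio.
Clock clock = new Clock(ac, 500.0f);
// addDependent() keeps the Clock updated even though nothing reads its output.
ac.out.addDependent(clock);
```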
Pausing and Killing UGens
UGens inherit specific behaviour from a parent class called Bead. A Bead can be paused and unpaused, and it can be killed (but it cannot be unkilled). (Beads can also send and receive messages, but UGens do not use this functionality unless you design them to.) When UGens are paused, they are not updated by the audio chain, and they no longer forward update requests up the audio chain. When UGens are killed, they are automatically removed from the audio chain during the update process (before an update occurs, we check to see whether the UGen is dead, and if so we remove it). When this happens, any UGens further upstream are also implicitly removed, unless they are connected in some other way to an active part of the audio chain. Since Java provides garbage collection, this means that killing a UGen can cause a whole audio chain to be garbage collected.
Beads also have a simple protocol for sending and receiving messages. Certain Beads respond to the method message(). Typically, Beads send messages with themselves as arguments, so for a to send a message to b you would use the command b.message(a) (if you’re not a programmer, this might look like it’s the wrong way around, but it means, “run the command ‘message’ on the object ‘b’, with the information ‘a’”). Note, however, that like UGens with respect to the audio chain, Beads will not respond to messages while they are paused.
Beads or UGens that have anything to do with timed events will provide a way to add a message listener that will respond to that event.
A typical example of using messages is the Clock object. Whenever a Clock gets to the next tick, it sends a message to any Beads that are registered as listeners to that Clock. Each Bead receives this message with the Clock as its argument (so it can decide how to respond to the fact that it is receiving messages from a Bead of type Clock, and can probe the Clock for information such as what time it is). It is also common to set up chains of messages. For example, a Pattern object can be set up to receive messages from a Clock object, and then forward messages on to other objects.
Another example is an Envelope object, which can be used to generate continuous linear or curved audio rate fades (known as segments). You can assign a listener to any given segment such that when the segment reaches its destination value, the listener will receive a message. This can be used, for example, to kill off a sound once its gain envelope reaches zero (see Triggers below).
To make a Bead that responds to a message, you have to override the messageReceived() method (not the message() method). This is often done inline as a quick and effective way to write some event-dependent behaviour into your Java code.
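An inline listener responding to a Clock can be sketched as follows (a fragment, assuming an AudioContext named ‘ac’, and that Clock provides addMessageListener() and getCount() as in current releases):

```java
Clock clock = new Clock(ac, 500.0f);
ac.out.addDependent(clock);
// Inline Bead: override messageReceived(), not message().
clock.addMessageListener(new Bead() {
    public void messageReceived(Bead message) {
        // The Clock sends itself, so we can probe it for information.
        Clock c = (Clock) message;
        System.out.println("tick " + c.getCount());
    }
});
```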
Beads can be grouped into Bead arrays using the BeadArray class. A BeadArray simply keeps a list of Beads, and forwards any messages to each of the Beads in its list, as long as they are not paused. Although not the default behaviour, BeadArray can also forward the pause, unpause and kill commands to all of its component Beads. Objects such as Clock that are likely to need to send messages to multiple listeners use BeadArray to handle this task. As with UGens in the audio chain, a Bead that is killed will automatically be removed from any BeadArrays the next time those BeadArrays attempt to forward a message to the dead Bead.
Triggers handle specific timed events using the Beads messaging protocol. For example, the KillTrigger class can be used to kill a specific Bead, such as when you want to stop a sound in response to a certain event. Likewise, there is a PauseTrigger and a StartTrigger (start = unpause), and an AudioContextStopTrigger, which stops all audio processing (and terminates the program if no other threads are running). Another useful trigger is the DelayTrigger (it’s actually a UGen, because it uses the audio clock), which waits a certain time and then fires a message. Since triggers are just Beads, their messages can be chained. For example, if you wanted to process 10 seconds of audio in non-realtime, you could set up a DelayTrigger of 10 seconds which was connected to an AudioContextStopTrigger.
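The non-realtime example from above can be sketched like this (assuming the DelayTrigger constructor takes the delay in milliseconds followed by the receiving Bead, and that AudioContext provides runNonRealTime(), as in current releases):

```java
AudioContext ac = new AudioContext();
// ... build your audio chain here ...
// After 10 seconds of processed audio, the DelayTrigger fires a message
// to the AudioContextStopTrigger, which halts all audio processing.
DelayTrigger dt = new DelayTrigger(ac, 10000.0f, new AudioContextStopTrigger(ac));
ac.out.addDependent(dt);
// Process audio as fast as possible rather than in realtime.
ac.runNonRealTime();
```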
A Sample is a segment of audio data, usually associated with an audio file. Samples can be used for playback, or recorded into, using a number of tools such as SamplePlayer and Recorder. Samples can be organised easily using the SampleManager class, which maintains a list of loaded Samples as well as arrangements of those Samples into groups, and is the recommended way to interact with Samples. The quickest way to play back a sound from a file in Beads is to load up the sound using the command “mySample = SampleManager.sample(filename)”, and then to create a SamplePlayer object, with ‘mySample’ as the specified Sample at instantiation time.
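The quick playback route can be sketched like this (a fragment, assuming an AudioContext named ‘ac’; the file path is hypothetical):

```java
// Load (and cache) the sample via SampleManager, then play it back.
Sample mySample = SampleManager.sample("audio/drum.wav");
SamplePlayer sp = new SamplePlayer(ac, mySample);
Gain g = new Gain(ac, 2, 0.5f);
g.addInput(sp);
ac.out.addInput(g);
ac.start();
```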
By default, when a new Sample is created from an audio file, all of the audio data is loaded into memory. If you’re working with large Samples, the Sample class offers a special mode which only buffers the audio that is required for playback, and holds onto it for a short period. This is much less efficient than playing back audio from memory, but avoids holding the whole file in memory at once.
Beads contains a set of tools for audio analysis which are currently under development. At present they are not documented and interested users should look at the source code, which contains a separate source code folder of examples.
Making a new UGen is simply a matter of creating a new class that extends UGen, and overriding the default constructor and the method calculateBuffer(). Within the calculateBuffer() method, you have access to an input 2D array of floats, bufIn[channel][index], and an output 2D array of floats, bufOut[channel][index] where channel refers to the input or output channel, and index refers to the index into the current signal vector. The job of the calculateBuffer() method is to fill up the bufOut array with the right data.
For efficiency, the audio chain is responsible for allocating these input and output buffers at each time step, and you cannot assume that at the beginning of the update the data in the bufOut array is the same as it was at the end of the previous update, since these arrays may get swapped about. However, if you want this you can set the UGen’s OutputInitialisationRegime to RETAIN. In this case, you are responsible for creating the array at instantiation. Likewise, when the UGen is paused, the default behaviour is that its output values are set to zero, but you can also adjust this by setting the UGen’s OutputPauseRegime to RETAIN.
The Beads source code offers many examples of calculateBuffer() methods.
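A minimal custom UGen might look like this (a sketch; the class and its behaviour are invented for illustration, but the overridden members are those described above, with bufferSize being a field inherited from UGen):

```java
// A UGen with one input and one output that halves the incoming signal.
public class Halver extends UGen {
    public Halver(AudioContext ac) {
        super(ac, 1, 1);  // one input channel, one output channel
    }
    @Override
    public void calculateBuffer() {
        // Fill bufOut with the right data for this signal vector.
        for (int i = 0; i < bufferSize; i++) {
            bufOut[0][i] = bufIn[0][i] * 0.5f;
        }
    }
}
```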
Input and output proxies can be set such that a UGen can operate as a wrapper for a more complex audio chain. For example, a reverb can be built out of a combination of tap delays, gains and filters, and proxies can be used to make sure that the reverb hands over its input arrays to its input proxy UGen, and grabs back its output arrays from the output proxy (which it also updates, triggering the nested audio chain). Proxies are at an experimental stage and aren’t properly documented, however.
Digital signal processing is highly processor intensive, and Java, despite claims that it is as fast as C/C++, is definitely slower in practice when it comes to audio processing. Furthermore, Java’s memory management system has the unfortunate effect that threads can get interrupted whilst the virtual machine does its periodic garbage collection. There are many factors affecting the efficiency and stability of a Beads program and this is just a brief handful of hints. Most obviously, some code is more efficient than other code: different data storage classes have different performance behaviours and are geared towards different tasks, so it helps to familiarise yourself with the Java collections library, and the pros and cons of objects such as ArrayList, LinkedList, Hashtable and so on. Secondly, Java has a number of commandline flags that change its behaviour. Depending on the Java version you’re using, the -server flag might speed things up a great deal (enabling the just-in-time compiler), along with some other flags for controlling the behaviour of the garbage collector. Thirdly, there are different ways to do IO, with different consequences. For example, JJack is better for low-latency audio than JavaSound.
Beads runs an independent thread to handle audio in real-time, meaning that all of the calculateBuffer() methods of all of the UGens in the audio chain are being called in this thread, along with all of the messages that stem from audio rate processes (such as from Clock, DelayTrigger and Envelope). On its own, therefore, Beads won’t encounter any concurrency issues. However, when you introduce a GUI or any other source of events from other threads, concurrency issues could arise.
Some Special Classes
Envelope, Glide and Static
Envelope, Glide and Static are all single-output UGens that are intended to be used to control audio-rate parameters of other UGens. Envelope generates line segments with a given duration, end value and curvature, in response to the command addSegment(). Multiple segments can be concatenated, and the Envelope can also fire a message at the end of a segment if required. Glide is simpler than Envelope. It has a settable glide time, and responds to commands to glide to a new destination value. Static is simpler still. It outputs a static value, which can be changed if necessary. These classes are designed to pause themselves when not changing.
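Typical usage of all three might be sketched as follows (a fragment, assuming an AudioContext named ‘ac’, and addSegment() and setValue() signatures as in current releases):

```java
// Envelope: fade a Gain in over 500ms, then out over 2s, then kill it.
Envelope env = new Envelope(ac, 0.0f);
Gain g = new Gain(ac, 1, env);  // gain level follows the envelope
env.addSegment(1.0f, 500.0f);
env.addSegment(0.0f, 2000.0f, new KillTrigger(g));

// Glide: smooth frequency changes with a 100ms glide time.
Glide freq = new Glide(ac, 440.0f, 100.0f);
WavePlayer wp = new WavePlayer(ac, freq, Buffer.SINE);
freq.setValue(880.0f);  // glides from 440Hz to 880Hz over 100ms

// Static: a fixed value wrapped as a UGen.
Static level = new Static(ac, 0.2f);
```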
Buffer and BufferFactory
Buffers are used in cases where a single lookup table is needed, such as for oscillators and amplitude windows. For example, WavePlayer uses a buffer to determine the waveform that is being played, and GranularSamplePlayer uses a buffer to determine the amplitude window of each grain. BufferFactories generate Buffers given a buffer size (the default size is 4096). Since it is common for many objects to access the same Buffer, the Buffer class offers a static table that can be used to store Buffers globally by name, and also has readymade static buffers of the most common waveforms.
If you need a custom UGen, it is very easy to generate one inline on the fly. Function provides an even easier approach when processing the output of one or more single-output UGens. Function takes a variable number of UGens as arguments to its constructor, and implementations of Function simply override the method calculate(), which is called every sample (rather than every time-step). Inside the calculate() method, users access the samples from the list of input UGens using an array ‘x’. For example, if you instantiated a new Function with arguments a, b, c (where a, b and c are UGens), then the expression
return x[0] + x[1] / x[2];
would be equivalent to the operation (a + b / c) on the outputs of the UGens a, b and c.
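Put together, the inline Function looks like this (a fragment; a, b and c are any single-output UGens):

```java
// calculate() is called once per sample; x[i] is the current sample of the i-th input.
Function f = new Function(a, b, c) {
    public float calculate() {
        return x[0] + x[1] / x[2];
    }
};
ac.out.addInput(f);
```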
A Pattern is a set of indexed events, where each index is an integer, and each event is a list of integers. A pattern responds to messages of type IntegerBead. Various Beads, such as Clock, identify themselves as IntegerBeads, where getInt() retrieves the integer value. Patterns can be used for sequencing musical events, but since the output of Pattern is just lists of integers, it is up to the user to interpret these integers as events. Pattern can also scale and loop the incoming sequence of integers.
WavePlayer plays back a cyclic waveform, such as a sine wave or square wave, from a Buffer (see above). The frequency of a WavePlayer can be controlled from a UGen (which can be used to create FM synthesis), as can the playback position in the cycle (which overrides the frequency control).
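A simple FM patch can be sketched by using a Function to map a modulator onto a frequency range (a fragment, assuming an AudioContext named ‘ac’; the centre frequency and modulation depth are arbitrary):

```java
// FM synthesis: the modulator's output (-1..1) sweeps the carrier
// frequency 100Hz either side of 440Hz.
WavePlayer modulator = new WavePlayer(ac, 40.0f, Buffer.SINE);
Function freq = new Function(modulator) {
    public float calculate() {
        return 440.0f + x[0] * 100.0f;
    }
};
WavePlayer carrier = new WavePlayer(ac, freq, Buffer.SINE);
```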
SamplePlayer and GranularSamplePlayer
Simple varispeed, multichannel sample playback can be achieved with SamplePlayer. SamplePlayer’s playback rate and loop start and end points can be controlled from a UGen. SamplePlayer has forwards, backwards and alternating (forwards and backwards) loop modes. Alternatively, the playback position of SamplePlayer can be controlled directly from a UGen (this overrides the loop mode and control of the playback rate). It is possible to choose an interpolation mode for playback: none, linear or cubic. SamplePlayer, and GranularSamplePlayer below, can run in a coarse mode which is more efficient but does not allow UGen (audio rate) control of these parameters.
GranularSamplePlayer behaves just like SamplePlayer but using granular playback. This means that the rate control varies the playback rate without adjusting the pitch. Additional granular parameters can be controlled using UGens: pitch, grain size, grain interval, grain randomness, and grain pan randomness.
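A granular time-stretch might be sketched as follows (a fragment, assuming an AudioContext ‘ac’, a loaded Sample ‘mySample’, and UGen-accepting setters named as in current releases):

```java
// Half-speed playback with pitch preserved, using 60ms grains.
GranularSamplePlayer gsp = new GranularSamplePlayer(ac, mySample);
gsp.setRate(new Static(ac, 0.5f));        // stretch time, not pitch
gsp.setPitch(new Static(ac, 1.0f));       // pitch unchanged
gsp.setGrainSize(new Static(ac, 60.0f));  // grain size in milliseconds
```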