019761700X
SuperCollider for the Creative Musician: A Practical Guide
Creating Technology for Music
Series Editor: V. J. Manzo, Associate Professor of Music Technology and Cognition at Worcester Polytechnic Institute
Eli Fieldsteel
Oxford University Press is a department of the University of Oxford. It furthers
the University’s objective of excellence in research, scholarship, and education
by publishing worldwide. Oxford is a registered trade mark of Oxford University
Press in the UK and certain other countries.
DOI: 10.1093/oso/9780197616994.001.0001
CONTENTS
ABOUT THE COMPANION WEBSITE
INTRODUCTION
PART I FUNDAMENTALS
CHAPTER 1 CORE PROGRAMMING CONCEPTS
1.1 Overview
1.2 A Tour of the Environment
1.3 An Object-Oriented View of the World
1.4 Writing, Understanding, and Evaluating Code
1.5 Getting Help
1.6 A Tour of Classes and Methods
1.7 Randomness
1.8 Conditional Logic
1.9 Iteration
1.10 Summary
INDEX
ACKNOWLEDGMENTS
This book is the result of over a decade of using the SuperCollider programming language for
music composition and experimental sound art, a journey that would not have been possible
without the support of many others. My brother, Nathan, introduced me to SuperCollider
in 2007 and provided the initial spark of interest. During my first year of graduate study in
music composition, my peers L. Scott Price and Ilya Rostovtsev were creating code-based
music that radically expanded the scope of my musical world. In 2010, I took a summer course
with James Lipton and Ron Kuivila at Wesleyan University, which put so many missing pieces
into place and allowed me to properly begin self-teaching. During my graduate studies, my
advisor Russell Pinkston was an incredible teacher and an endless fount of knowledge. The
Companion Code materials that accompany this book include audio samples from musicians
who have enriched my life through collaborations, performances, and recording sessions,
notably flutist Kenzie Slottow, percussionist Adam Groh, and saxophonist Nathan Mandel.
A few close friends, colleagues, and former teachers—Jake Metz, Jon C. Nelson, and Stephen
Taylor—generously volunteered their time to read and provide feedback on parts of the man-
uscript as it took shape. I also owe a great deal to the many gurus and developers of the
SuperCollider community, who were always kind and helpful in online forums when I had
questions. Finally, I am deeply grateful for my wife, Miriam, who is one of the most compas-
sionate and supportive people I have ever met.
Thank you to everyone who helped me become the musician I am today.
ABOUT THE COMPANION WEBSITE
www.oup.com/us/supercolliderforthecreativemusician
We have created a website to accompany SuperCollider for the Creative Musician: A Practical
Guide. Throughout this book, in-line examples of SuperCollider code appear in chapters,
and additionally, each chapter pairs with one or more large-scale SuperCollider code projects.
Collectively, these examples serve to amplify and augment concepts discussed in the book.
The former (Code Examples) focus on bite-size ideas, while the latter (Companion Codes)
explore broader topics in greater detail. All these examples are available on this website.
When Companion Code files are referenced in the text, they are accompanied by Oxford's companion website icon.
The purpose of this website is to foster a hands-on learning approach, by enabling
the reader to quickly explore code on their own, while avoiding the burden of having to
copy code by hand into a computer. When engaging with a specific chapter, the reader is
encouraged to download and open the corresponding Code Examples and Companion Codes
in SuperCollider, so that they are readily accessible whenever the urge to experiment arises.
INTRODUCTION
Welcome to SuperCollider for the Creative Musician: A Practical Guide. I hope you’ll find this
book rewarding, ear-opening, and fun.
A little bit about myself: I’ve been a musician for most of my life. I took piano lessons
when I was young, learned other instruments throughout middle school and high school, and
became seriously interested in music composition when I was about 15 years old. At the same
time, math and computer science were always among my strongest aptitudes. I took a handful
of college math courses during my last year of high school, and I had studied the manual for
my TI-83 calculator to the point where I was able to start coding semi-complex programs on
my own. I dabbled in commercial audio software (GarageBand, mostly) in high school and
college, but it wasn’t until I’d started graduate studies in music composition that I began to
realize the depth to which music and computational thinking could be artfully combined.
Discovering SuperCollider and seeing/hearing what it could do opened creative doors that
I didn’t even know existed. Getting it to do what I wanted, however . . . was a different matter.
I’d never had any formal training in computer programming beyond a few spontaneously
taken college classes, and as coding platforms go, SuperCollider is quirky, so it was a slow up-
hill climb for me. After several years, as I finally began feeling competent and self-sufficient,
I began taking notes on things I wish I’d known from day one. These notes evolved into a sort
of curriculum, which manifested as an ongoing series of video tutorials published on YouTube,
and now, this book.
Why use code to make music? Under the hood, all audio software is built with code, but
often presented to the user as a rich graphical interface layer meant to provide an intuitive
means of control, helping new users start making sounds quickly and easily, without really
needing to know much about the underlying principles. It’s wonderful that these tools exist,
but for users wanting to get their hands deeper into the machine to do things that are more
precise or customized, these tools can feel restrictive because the inner workings are “con-
cealed” behind a thick graphical user interface. Programs like SuperCollider, which allow the
user to work directly with code, provide far more direct control, power, and flexibility over the
types of sounds that happen, when they happen, and how they happen. This book doesn’t just
present a series of cool sounds and techniques, it also attempts to unpack and dissect impor-
tant details, show alternatives, illustrate common misunderstandings, and contextualize these
techniques within a broader creative framework.
This book, and other teaching materials I’ve published, exist with a central goal of
simplifying and accelerating the learning process for anyone who’s recently discovered
SuperCollider and feels a similar spark. Having taught music technology in higher education
for several years, I’ve encountered several types of students for whom I think this book would
be valuable. As the title suggests, it’s a structured tutorial and reference guide for anyone
wanting to create electronic music or experimental sound art with SuperCollider, and assumes
readers fall somewhere on the continuum between musician and computer programmer.
Maybe you’re a seasoned musician with a budding interest in coding, or maybe you’re a soft-
ware developer who creates electronic music as a side hustle. Whatever the case, if you have
some familiarity with music and code—even just a little of each—you should be able to use
this book to pick up the basics and get some new creative projects off the ground. Keep in
mind, however, that this book isn’t a formal substitute for an introduction to programming,
digital audio principles, or music theory. These are very broad topics, beyond the scope of this
publication. They are selectively discussed where relevant, but the overall focus of this book
remains on using SuperCollider to develop creative audio projects.
This book divides into three parts: Part I covers the fundamentals of using the language,
expressing ideas through code, writing basic programs, and producing simple sounds. Part II
focuses on common practice techniques, including synthesis, sampling, sequencing, signal
processing, external controllers, and graphical user interface design. In short, this second part
details the “ingredients” that make up a typical project. Part III brings it all together and
offers concrete strategies for composing, rehearsing, and performing large-scale projects.
It’s hard to overstate the value of hands-on learning, especially for a sound-oriented coding
language. To this end, each chapter includes numerous in-line SuperCollider code examples,
and each chapter also pairs with one or more “Companion Code” files, which explore chapter
topics in significantly greater depth. All of the Code Examples and Companion Codes are
hosted on a companion website and referenced in the text with the companion website icon.
Use these resources! As you read, you’ll (hopefully) get the urge to experiment. When you do,
download these files, study their contents, and play around with them. These files exist to
provide immediate, seamless access, avoiding the need to manually copy code from the book.
Learning SuperCollider is a bit like learning a foreign language; reading a textbook is certainly
a good idea, but if you want to become fluent, there’s no proper substitute for immersing your-
self in the language. You’ll make some mistakes, but you’ll also make important discoveries.
It’s all part of the learning process!
Before you begin reading, download the SuperCollider software, which (at the time
of writing) is available at supercollider.github.io. You may also want to visit/bookmark the
SuperCollider project page on GitHub (github.com/supercollider/supercollider), which includes
a wiki, links to tutorials, and many other resources. I recommend working with a quality pair
of headphones or loudspeakers, although built-in laptop speakers will work in a pinch. A good
quality microphone and audio interface can be useful for exploring topics in Chapter 6, but
there’s plenty to keep you busy if you don’t have access to these things, and a built-in laptop
microphone will suffice for basic exploration. A standard MIDI keyboard controller will also
be helpful (though not essential) for exploring some of the code examples in Chapters 7–8.
Finally, as you work through this book, keep an open mind and be patient with yourself.
There’s a lot to learn and it won’t happen overnight, but with practice and dedication, you’ll
find that an amazing new world of sound will begin to reveal itself, which can be an incredibly
exciting and liberating experience. Onward!
PART I
FUNDAMENTALS
In Part I, we will explore fundamental principles and techniques meant to help you begin
using SuperCollider at a basic level. Specifically, after completing these two chapters, you
should be able to write simple programs and play simple sounds.
CHAPTER 1
Core Programming Concepts
1.1 Overview
This chapter covers the essentials of programming in SuperCollider, such as navigating the
environment, getting acquainted with basic terminology, understanding common data types,
making simple operations, and writing/running simple programs. The goal is to become fa-
miliar with things you’ll encounter on an everyday basis and develop a foundational under-
standing that will support you through the rest of this book and as you embark on your own
SC journey.
Keyboard shortcuts play an important role in SC, as they provide quick access to common
actions. Throughout this book, the signifier [cmd] refers to the command key on macOS sys-
tems, and the control key on Windows and Linux systems.
1.2 A Tour of the Environment
The centerpiece of the SC environment is the code editor, a large blank area that serves as your
workspace for creating, editing, and executing code. Like a web browser, multiple files can be
open simultaneously, selectable using the tab system that appears above the editor. Code files
can be saved and loaded as they would be in any text editing or word processing software. SC
code files use the file extension “scd” (short for SuperCollider document). The code editor is
surrounded by three docklets: the post window, the help browser, and the documents list. The
post window is where SC displays information for the user. Most often, this is the result of
evaluated code, but may also include error messages and warnings. The help browser provides
access to documentation files that detail how SC works. The documents list displays names
of files that are currently open, offering a handy navigation option if many documents are
open at once. Altogether, these and other features constitute the SC Integrated Development
Environment, commonly known as the IDE. The IDE serves as a front-end interface for the
user, meant to enhance workflow by providing things like keyboard shortcuts, text colorization,
auto-completion, and more. The preferences panel, which is shown in Figure 1.2, is where some
of these customizations can be made. The preferences panel is accessible in the “SuperCollider”
drop-down menu on macOS, and the “Edit” drop-down menu on Windows/Linux.
The IDE is one of three components that make up the SC environment. In addition,
there is the SC language (often called “the language” or “the client”) and the SC audio server
(often called “the server”). The technical names for these components are sclang and scsynth,
respectively, and despite being integral parts of the environment, they remain largely invisible
to the user.
The language is home to a library of objects, which are the basic building blocks of
any program. It also houses the interpreter, a core component that interprets and executes
code, translating it into action. The interpreter automatically starts when the IDE is
launched, and a small status box in the lower-right corner of the IDE displays whether
the interpreter is active. SC is an interpreted language, rather than a compiled language.
In simple terms, this means that code can be executed selectively and interactively, on
a line-by-line basis, rather than having to write the entirety of a program before run-
ning it. This is especially useful because SC is regularly used for real-time creative audio
applications. In some musical scenarios (improvisation, for example), we won’t know all
the details in advance, so it’s convenient to be able to interact with code “in the moment.”
This dynamism is featured in examples throughout this book (many of which are broken
into separate chunks of code) and is on full display in the last chapter of this book, which
discusses live coding.
The audio server is the program that handles calculation, generation, and processing of
audio signals. Colloquially, it’s the “engine” that powers SC’s sound capabilities. It cannot be
overstated that the server is fully detached and fundamentally independent from the language
and IDE. The language and the server communicate over a network using a client-server
model. For example, to play some audio, we first evaluate some code in the language. The
language, acting as a client on the network, sends a request to the server, which responds by
producing sound. In a sense, this setup is like checking email on a personal computer. An
email client gives the impression that your messages are on your own machine, but in reality,
you’re a client user on a network, sending requests to download information stored on a re-
mote server.
Often, the language and server are running on the same computer, and this “net-
work” is more of an abstract concept than a true network of separate, interconnected
devices. However, the client-server design facilitates options involving separate, networked
machines. It’s possible for an instance of the language on one computer to communi-
cate with an audio server running on a different computer. It’s equally possible for mul-
tiple clients to communicate with a single server (a common configuration for laptop
ensembles), or for one language-side user to communicate with multiple servers. Messages
passed between the language and server rely on the Open Sound Control (OSC) protocol,
therefore the server can be controlled by any OSC-compliant software.1 Most modern
programming languages and audiovisual environments are OSC-compliant, such as Java,
Python, Max/MSP, and Processing, to name a few. Even some digital audio workstations
can send and receive OSC. However, using an alternative client doesn’t typically provide
substantial benefits over the SC language, due to its high-level objects that encapsulate
and automatically generate OSC messages. In addition, the client-server divide improves
stability; if the client application crashes, the server can play on without interruption while
the client is rebooted.
Unlike the interpreter, the server does not automatically start when the IDE is launched
and must be manually booted before we can start working with sound, which is the focus of
the next chapter. For now, our focus is on the language and the IDE.
1.4 Writing, Understanding, and Evaluating Code
To begin, type the following line of code into the editor:
4.squared;
Then, making sure your mouse cursor is placed somewhere on this line, press [shift]+[return],
which tells the interpreter to execute the current line. You’ll see a brief flash of color over the
code, and the number 16 will appear in the post window. If this is your first time using SC,
congratulations! You’ve just run your first program. Let’s take a step back and dissect what
just happened.
In this expression, the integer 4 is the receiver, and squared is a method applied to it.2 The same operation can also be written in a second style, in which the method comes first and the receiver appears in a parenthetical enclosure:
squared(4);
Why choose one style over the other? This is ultimately dictated by personal preference and
usually governed by readability. For example, an English speaker would probably choose
4.squared instead of squared(4), because it mimics how the phrase would be spoken.
The parentheses used to contain the receiver are one type of enclosure in SC. Others
include [square brackets], {curly braces}, “double quotes,” and ‘single quotes.’ Each type of
enclosure has its own significance, and some have multiple uses. We’ll address enclosures in
more detail as they arise, but for now it's important to recognize that an enclosure always
involves a pair of symbols: one to begin the enclosure, and another to end it. Sometimes, an
enclosure contains only a few characters, while others might contain thousands of lines. If an
opening bracket is missing its partner, your code won’t run properly, and you’ll likely see an
error message.
The semicolon is the statement terminator. This symbol tells the interpreter where one
expression ends and, possibly, where another begins. Semicolons are the code equivalent of
periods and question marks in a novel, which help our brain parse the writing into discrete
sentences. In SC, every code statement should always end with a semicolon. Omitting a semi-
colon is fine when evaluating only a single expression, but omitting a semicolon in a multi-line
situation will usually produce an error.
In the case of either 4.squared or squared(4), the result is the same. On evaluation, the
interpreter returns a value of 16. Returning a value is not the same thing as printing it in the
post window. In fact, the interpreter always posts the value of the last evaluated line, which
makes this behavior more of a byproduct of code evaluation than anything else. Returning
a value is a deeper, more fundamental concept, which means the interpreter has taken your
code and digested it, and is passing the result back to you. We can think of the expression
4.squared or squared(4) as being an equivalent representation of the number 16. As such,
we can treat the entire expression as a new receiver and chain additional methods to apply ad-
ditional operations. For instance, Code Example 1.1 shows how we can take the reciprocal (i.e.,
one divided by the receiver) of the square of a number, demonstrated using both syntax styles:
4.squared.reciprocal;
reciprocal(squared(4));
Methods chained using the “receiver-dot-method” style are applied from left to right. When
using the “method(receiver)” syntax, the flow progresses outward from the innermost set of
parentheses.
Unlike squared or reciprocal, some methods require additional information to re-
turn a value. For instance, the pow method raises its receiver to the power of some exponent,
which the user must provide. Code Example 1.2 shows an expression that returns three to the
power of four, using both syntax styles. In this context, we say that four is an argument of the
pow method. Arguments are a far-reaching concept that we’ll explore later in this chapter.
For now, think of an argument as an input value needed for some process to work, such as
a method call. Attempts to use pow without providing the exponent will produce an error.
Note that when multiple items appear in an argument enclosure, they must be separated with
commas.
3.pow(4);
pow(3, 4);
A long chain of methods, written as a single expression, would eventually spill over onto a new line and would also become increasingly difficult to read and understand. For
this reason, we frequently break up a series of operations into multiple statements, written on
separate lines. To do this correctly, we need some way to “capture” a value, so that it can be
referenced in subsequent steps.
A variable is a named container that can hold any type of object. One way to create
a variable is to declare it using a var statement. A variable name must begin with a low-
ercase letter. Following this letter, the name can include uppercase/lowercase letters,
numbers, and/or underscores. Table 1.1 contains examples of valid and invalid vari-
able names.
TABLE 1.1 Examples of valid and invalid variable names.

Valid       Invalid
num         Num
myValue     my#$%&Value
sample_04   sample-04
Variable names should be short, but meaningful, striking a balance between brevity
and readability. Once a variable is declared, an object can be assigned to a variable using
the equals symbol. A typical sequence involves declaring a variable, assigning a value, and
then repeatedly modifying the value and assigning each new result to the same variable,
overwriting the older assignment in the process. This initially looks strange from a “mathe-
matical” perspective, but the equals symbol doesn’t mean numerical equality in this case; in-
stead, it denotes a storage action. In Code Example 1.3, the expression num = num.squared
could be translated into English as the following command: “Square the value currently
stored in the variable named ‘num,’ and store the result in the same variable, overwriting the
old value.”
To evaluate a multi-line block, we need to learn a new technique and a new key-
board shortcut. The code in Code Example 1.3 has been wrapped in an enclosure of
parentheses, each on its own line above and below. Here, we’re already seeing that a
parenthetical enclosure serves multiple purposes. In this case, it delineates a multi-line
block: a modular unit that can be passed to the interpreter with a single keystroke.
Instead of pressing [shift]+[return], place the cursor anywhere inside the enclosure and
press [cmd]+[return].
(
var num;
num = 4;
num = num.squared;
num = num.reciprocal;
)
Also note that in this context, the presence of return characters (rather than
semicolons) determines what constitutes a “line” of code. For example, despite
containing multiple statements, the following example is considered a single line, so
either [cmd]+[return] or [shift]+[return] will evaluate it:
var num = 4; num = num.squared; num = num.reciprocal;
Variables declared using a var statement are local variables; they are local to the evaluated code
of which they are a part. This means that once evaluation is complete, variables created in this
way will no longer exist. If you later attempt to evaluate num by itself, you’ll find that it not
only has lost its value assignment, but also produces an error indicating that the variable is un-
defined. Local variables are transient. They are useful for context-specific cases where there’s
no need to retain the variable beyond its initial scope.
If we need several local variables, they can be combined into a single declaration. They
can also be given value assignments during declaration. Code Example 1.4 declares three
variables and provides initial assignments to the first two. We square the first variable, take
the reciprocal of the second, and return the sum, which is 49.2.
(
var thingA = 7, thingB = 5, result;
thingA = thingA.squared;
thingB = thingB.reciprocal;
result = thingA + thingB;
)
TIP.RAND(); NEGATIVE NUMBERS AS DEFAULT VARIABLE ASSIGNMENTS
If a variable is given a negative default value during declaration, there’s a potential pit-
fall. The negative value must either be in parentheses or have a space between it and
the preceding equals symbol. If the equals symbol and the minus symbol are directly
adjacent, SC will mistakenly interpret the pair of symbols as an undefined operation.
This same rule applies to a declaration of arguments, introduced in the next section.
Of the following three expressions, the first two are valid, but the third will fail.
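A minimal sketch of what such expressions might look like (the variable name is hypothetical):

var num = -2;   // valid: a space separates the equals and minus symbols
var num = (-2); // valid: the negative value is parenthesized
var num =-2;    // invalid: "=-" is read as a single, undefined operator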
What if we want to retain a variable, to be used again in the future as part of a separate
code evaluation? We can use an environment variable, created by preceding the variable name
with a tilde character (~). Alternatively, we can use one of the twenty-six lowercase alpha-
betic characters, which are reserved as interpreter variables. Both environment and interpreter
variables can be used without a local declaration, and both behave with global scope, that is,
they will retain their value (even across multiple code documents) as long as the interpreter
remains active. Interpreter variables are of limited use, since there are only twenty-six of them
and they cannot have longer, more meaningful names. But they are convenient for bypassing
declaration when sketching out an idea or quickly testing something. Environment variables
are generally more useful because we can customize their names. The two blocks of code in
Code Example 1.5 are each equivalent to the code in Code Example 1.3, but in either case, the
variable will retain its assignment after evaluation is complete.
(
n = 4;
n = n.squared;
n = n.reciprocal;
)
(
~num = 4;
~num = ~num.squared;
~num = ~num.reciprocal;
)
The postln method, which prints its receiver and then returns it, can be inserted into the middle of a method chain to reveal an intermediate value without disrupting the overall calculation:
(
~num = 4;
~num = ~num.squared.postln;
~num = ~num.reciprocal;
)
1.4.4 COMMENTS
A comment is text that the interpreter ignores. Comments are typically included to provide
some information for a human reader. The imagined reader might be someone else, like a
friend or collaborator, but it also might be you! Leaving notes for yourself is a good way to
jog your memory if you take a long break from a project and don’t quite remember where you
left off.
There are two ways to designate text as a comment. For short comments, preceding text
with two forward slashes will “comment out” the text until the next return character. To create
larger comments that include multiple lines of text, we enclose the text in another type of en-
closure, beginning with a forward slash–asterisk, and ending with an asterisk–forward slash.
These styles are depicted in Code Example 1.7.
(
/*
this is a multi-line comment, which might
be used at the top of your code in order
to explain some features of your program.
*/
var num = 4; // a single-line comment: declare a variable
num = num.squared;
num = num.reciprocal;
)
1.4.5 WHITESPACE
Whitespace refers to the use of “empty space” characters, like spaces and new lines. Generally,
SC is indifferent to whitespace. For example, both of the following statements are considered
syntactically valid and will produce the same result:
4.squared+2;
4 . squared + 2;
The use of whitespace is ultimately a matter of taste, but a consensus among programmers
seems to be that there is a balance to be struck. Using too little whitespace cramps your code
and doesn’t provide enough room for it to “breathe.” Too much whitespace, and code spreads
out to the point of being similarly difficult to read. The whitespace choices made throughout
this book are based on personal preference and years of studying other programmers’ code.
1.5 Getting Help
SC includes extensive built-in documentation, with guides, overviews, and individual help files for every object.3 It includes a search feature, as well as a browse option
for exploring the documentation by category. It’s a good idea to spend some time browsing to
get a sense of what’s there. Keep in mind that the documentation is not structured as a com-
prehensive, logically sequenced tutorial (if it were, there’d be less need for books such as this);
rather, it tends to be more useful as a quick reference for checking some specific detail.
There are two keystrokes for quickly accessing a page in the help browser: [cmd]+[d] will
perform a search for the class or method indicated by the current location of the mouse cursor,
and [shift]+[cmd]+[d] invokes a pop-up search field for querying a specific term. If there is an
exact class match for your query (capitalization matters), that class’s help file will appear. If
not, the browser will display a list of results that it thinks are good matches. Try it yourself!
Type “Integer” into the editor, and press [cmd]+[d] to bring up the associated help file.
The structure of a class help file appears in Figure 1.3. In general, these files each begin
with the class name, along with its superclass lineage, a summary of the class, and a handful
of related classes and topics. Following this, the help file includes a detailed description, class
methods, instance methods, and often some examples that can be run directly in the browser
and/or copied into the text editor.
FIGURE 1.3 The structure of a class help file. Vertical ellipses represent material omitted for clarity
and conciseness.
1.6 A Tour of Classes and Methods
There are many methods that perform mathematical operations with numbers. Those that
perform an operation involving two values are called binary operators, while those that per-
form an operation on a single receiver, like taking the square root of a number, are called unary
operators. Most operations exist as method calls (e.g., squared), but some, like addition and
multiplication, are so commonly used that they have a direct symbolic representation (imagine
if multiplication required typing 12.multipliedBy(3) instead of 12 * 3). Descriptive lists of
commonly used operators appear in Tables 1.2 and 1.3. A more complete list can be found in
a SC overview file titled “Operators.”
TABLE 1.2 Commonly used binary operators.

Name            Method Usage   Symbolic Usage
Addition        N/A            x + y;
Subtraction     N/A            x - y;
Multiplication  N/A            x * y;
Division        N/A            x / y;
“Order of operations” refers to rules governing the sequence in which operations should
be performed. In grade school, for example, we typically learn that in the expression 4 + 2
* 3, multiplication should happen first, and then addition. SC has its own rules for order of
operations, which may surprise you. If an expression includes multiple symbolic operators,
SC will apply them from left-to-right, without any precedence given to any operation. If an
expression includes a combination of method calls and symbolic operators, SC will apply the
method calls first, and then the symbolic operators from left-to-right. Parenthetical enclosures
have precedence over method calls and symbolic operators (see Code Example 1.8). In some
cases, it can be good practice to use parentheses, even when unnecessary, to provide clarity
for readers.
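For instance, a small sketch of these rules (values chosen arbitrarily):

4 + 2 * 3;         // -> 18: symbolic operators apply left-to-right
4 + (2 * 3);       // -> 10: parentheses take precedence
4 + 2.squared * 3; // -> 24: the method call applies first (2.squared is 4)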
1.6.2 STRINGS
A string is an ordered sequence of characters, delineated by an enclosure of double quotes. A string
can contain any text, including letters, numbers, symbols, and even non-printing characters like
tabs and new lines. Strings are commonly used to represent text and may be used to provide
names or labels for certain types of objects. Several examples appear in Code Example 1.9.
Certain characters need special treatment when they appear inside a string. For example,
what if we want to put a double quotation mark inside of a string? If not done correctly, the
internal quotation mark will prematurely terminate the string, likely causing the interpreter
to report a syntax error. Special characters are entered by preceding them with a backslash,
which in this context is called the escape character. Its name refers to the effect it has on the
interpreter, causing it to “escape” the character’s normal meaning.
// a typical string
"Click the green button to start.";
We know the plus symbol denotes addition for integers and floats, but it also has meaning for
strings (the usage of one symbol for multiple purposes is called overloading and is another ex-
ample of polymorphism). A single plus will return a single string from two strings, inserting a
space character between them. A double plus does nearly the same thing but omits the space.
These and a handful of other string methods appear in Code Example 1.10.
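For example, a brief sketch of the two concatenation operators:

"Hello" + "there";  // -> "Hello there" (space inserted)
"Hello" ++ "there"; // -> "Hellothere" (no space)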
1.6.3 SYMBOLS
A symbol is like a string, in that it’s composed of a sequence of characters and commonly used
to name or label things. Where a string is used, a symbol can often be substituted, and vice-
versa. A symbol is written in one of two ways: by preceding the sequence of characters with a
backslash (e.g., \freq), or by enclosing it in single quotes (e.g., 'freq'). These styles are largely
interchangeable, but the quote enclosure is the safer option of the two. Symbols that begin
with or include certain characters will trigger syntax errors if using the backslash style. Unlike
a string, a symbol is an irreducible unit; it is not possible to access or manipulate the indi-
vidual characters in a symbol, and all symbols return zero in response to the size method. It
is, however, possible to convert back and forth between symbols and strings using the methods
asSymbol and asString (see Code Example 1.11). Symbols, being slightly more optimized
than strings, are the preferable choice when used as names or labels for objects.
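A brief sketch of these conversions:

\freq.asString;  // -> "freq" (a string)
"freq".asSymbol; // -> freq (a symbol)
\freq.size;      // -> 0 (symbols are irreducible)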
1.6.4 BOOLEANS
There are exactly two instances of the Boolean class: true and false. These values are special
keywords, meaning we can’t use either as a variable name. If you type one of these keywords
into the code editor, you’ll notice its text will change color to indicate its significance. Booleans
play a significant role in conditional logic, explored later in this chapter.
Throughout SC, there are methods and binary operators that represent true-or-false
questions, and which return a Boolean value that represents the answer. These include less-than/
greater-than operations, and various methods that begin with the word “is” (e.g., isInteger,
isEmpty), which check whether some condition is met. Some examples appear in Code
Example 1.12, and a list of common binary operators that return Booleans appears in Table 1.4.
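For instance, a few expressions that return Booleans (values chosen arbitrarily):

3 < 4;       // -> true
3 > 4;       // -> false
4.isInteger; // -> true
"".isEmpty;  // -> true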
1.6.5 NIL
Like true and false, nil is a reserved keyword in the SC language, and is the singular
instance of the Nil class. Most commonly, it represents the value of a variable that hasn’t
been given a value assignment, or something that doesn’t exist. We rarely use nil ex-
plicitly, but it shows up frequently, so it’s helpful to be familiar. The isNil method,
demonstrated in Code Example 1.13, can be useful for confirming whether a variable has
a value (attempting to call methods on an uninitialized variable is a common source of
error messages).
(
var num;
num.isNil.postln; // check the variable — initially, it's nil
num = 2; // make an assignment
num.isNil.postln; // check again — it's no longer nil
)
1.6.6 ARRAYS
An array is an ordered collection of objects. Syntactically, objects stored in an array are
separated by commas and surrounded by an enclosure of square brackets. Arrays are like
strings in that both are ordered lists, but while strings can only contain text characters, an
array can contain anything. In fact, arrays can (and often do) contain other arrays. Arrays are
among the most frequently used objects, because they allow us to express an arbitrarily large
collection as a singular unit. Arrays have lots of musical applications; we might use one to con-
tain pitch information for a musical scale, a sequence of rhythmic values, and so on.
As shown in Code Example 1.14, we can access an item stored in an array by using the at
method and providing the numerical index. Indices begin at zero. As an alternative, we can
follow an array with a square bracket enclosure containing the desired index.
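For example, a sketch of both indexing styles (array contents chosen arbitrarily):

x = [10, 20, 30];
x.at(1); // -> 20 (indices begin at zero)
x[1];    // -> 20 (equivalent shortcut)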
Most unary and binary operators defined for numbers can also be applied to arrays, if they
contain numbers. Several examples appear in Code Example 1.15. If we apply a binary oper-
ator to a number and an array, the operation is applied to the number and each item in the
array, and the new array is returned. A binary operation between two arrays of the same size
returns a new array of the same size in which the binary operation has been applied to each
pair of items. If the arrays are different sizes, the operation is applied to corresponding pairs
of items, but the smaller array will repeat itself as many times as needed to accommodate the
larger array (this behavior is called “wrapping”).
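A few sketches of these behaviors:

[1, 2, 3] + 10;          // -> [11, 12, 13]
[1, 2, 3] * [4, 5, 6];   // -> [4, 10, 18]
[1, 2, 3, 4] + [10, 20]; // -> [11, 22, 13, 24] (the smaller array wraps)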
The dup method, defined for all objects, returns an array of copies of its receiver. An integer,
provided as an argument, determines the size of the array. The exclamation mark can also be
used as a symbolic shortcut (see Code Example 1.16).
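For example:

7.dup(4); // -> [7, 7, 7, 7]
7 ! 4;    // -> [7, 7, 7, 7] (symbolic shortcut)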
Arrays are a must-learn feature, rich with many convenient methods and uses—too many to
squeeze into this section. Companion Code 1.1 provides an array “cheat sheet,” covering many
of these uses.
1.6.7 FUNCTIONS
We’ll inevitably write some small program that performs a useful task, like creating chords
from pitches or converting metric values to durations. Whatever this code does, we certainly
don’t want to be burdened with copying and pasting it all over our file—or worse, typing it
out multiple times—because this consumes time and space. Functions address this problem by
encapsulating some code and allowing us to reuse it easily and concisely. Functions are espe-
cially valuable in that they provide an option for modularizing code, that is, expressing a large
program as smaller, independent code units, rather than having to deal with one big messy file.
A modular program is generally easier to read, understand, and debug.
A function is delineated by an enclosure of curly braces. If you type the function that
appears in Code Example 1.17 from top to bottom, you’ll notice the IDE will automatically
indent the enclosed code to improve readability. If you write your code in an unusual order,
the auto-indent feature may not work as expected, but you can always highlight your code and
press the [tab] key to invoke the auto-indentation feature. Once a function is defined, we can
evaluate it with value, or by following it with a period and a parenthetical enclosure. When
evaluated, a function returns the value of the last expression it contains.
(
f = {
var num = 4;
num = num.squared;
num = num.reciprocal;
};
)
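Once defined, the function can be evaluated in either style, for example:

f.value; // -> 0.0625
f.();    // equivalent syntax; also returns 0.0625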
The function in Code Example 1.17 is not particularly useful, because it produces the same
result every time. Usually, we want a function whose output depends on some user-provided
input. For instance, suppose we want our function to square and take the reciprocal of an ar-
bitrary value.
We’ve already seen similar behavior with pow (which is, itself, a sort of function). We
provide the exponent as an argument, which influences the returned value. So, in our func-
tion, we can declare an argument of our own using an arg statement. Like a variable, an
argument is a named container that holds a value, but which also serves as an input to a
(
f = {
arg input = 4;
var num;
num = input.squared;
num = num.reciprocal;
};
)
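For example, evaluating this function with and without an argument (a sketch):

f.value(10); // -> 0.01 (ten squared is 100, whose reciprocal is 0.01)
f.value;     // -> 0.0625 (falls back on the default input of 4)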
Code Example 1.19 shows a syntax alternative that replaces the arg keyword with an enclo-
sure of vertical bar characters (sometimes called “pipes”) and declares multiple arguments,
converting the code from Code Example 1.4 into a function. When executing a function
with multiple arguments, the argument values must be separated by commas, and will be
interpreted in the same order as they appear in the declaration.
(
g = { |thingA = 7, thingB = 5|
var result;
thingA = thingA.squared;
thingB = thingB.reciprocal;
result = thingA + thingB;
};
)
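For example:

g.value(3, 4); // -> 9.25 (3.squared plus 4.reciprocal)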
1.6.9 LITERALS
In Section 1.3.3, we compared a house to a blueprint of a house to better understand the con-
ceptual difference between classes and instances. If this example were translated into SC code,
it might look something like this:
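x = House.new;  // House is an imaginary class, not part of SC
x.paint(\blue); // interact with the instance through methods
x.addGarage;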
In this imaginary example, we make a new house, paint it blue, and add a garage. This general
workflow—creating a new thing, storing it in a variable, and interacting with it through in-
stance methods—is quite common. In fact, most of the objects we’ll introduce from this point
forward will involve a workflow that follows this basic pattern. We won’t necessarily use new
in every case (some classes expect different creation methods), but the general idea is the same.
Interestingly, we haven’t been using this workflow for the classes introduced in this sec-
tion. For example, we don’t have to type Float.new(5.2) or Symbol.new(\freq). Instead,
we just type the object as is. Classes like these, which have a direct, syntactical representation
through code, are called literals. When we type the number seven, it is literally the number
seven. But an object like a house can’t be directly represented with code; there is no “house”
symbol in our standard character set. So, we must use the more abstract approach of typing
x = House.new, while its literal representation remains in our imagination.
Integers, floats, strings, symbols, Booleans, and functions are all examples of literals.
Arrays, for the record, exist in more of a grey area; they have a direct representation via square
brackets, but we may sometimes create one with Array.new or a related method. There is
also a distinction between literal arrays and non-literal arrays, but it is not relevant to our
current discussion. The point is that many of the objects we’ll encounter in this book are not
literals and require creation via some method call to their class.
1.7 Randomness
Programming languages are quite good at generating randomness. Well, not exactly—
programming languages are good at producing behavior that humans find convincingly
random. Most random algorithms are pseudo-random; they begin with a “seed” value, which
fuels a deterministic supply of numbers that feel acceptably random. Regardless of how they’re
generated, random values can be useful in musical applications. We can, for example, spawn
random melodies from a scale, shuffle metric values to create rhythmic variations, or ran-
domize the stereophonic position of a sound.
The rrand method returns a uniformly distributed value between a minimum and a maximum, while exprand approximates an exponential distribution, which means that, on average, the output values will
tend toward the minimum. This method always returns a float, and the minimum and max-
imum value must have the same sign (both positive or both negative, and neither can be zero).
These methods are demonstrated in Code Example 1.20. This book favors the “method(receiver)”
syntax for these methods because it highlights their ranged nature more clearly.
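As a brief sketch of their usage (bounds chosen arbitrarily):

rrand(1, 6);        // uniform distribution; integer bounds return an integer
rrand(1.0, 6.0);    // float bounds return a float
exprand(20, 20000); // exponential-like distribution; always returns a float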
Which of these two methods should you use? It depends on what the output will be used for.
A dice roll or coin flip, in which all outcomes are equally probable, can be accurately simu-
lated with rrand. However, our ears perceive certain musical parameters nonlinearly, such as
frequency and amplitude. Therefore, using a uniform distribution to generate values for these
parameters may produce a result that sounds unbalanced. Consider, for example, generating a
random frequency between 20 and 20,000 Hz. If using rrand, then the output will fall in the
upper half of this range about half the time (between approximately 10,000 and 20,000 Hz).
This may seem like a wide range, but from a musical perspective, it’s only one octave, and a
very high-pitched octave at that! The other half of this range spans approximately 9 octaves,
so the sonic result would appear saturated with high frequencies, while truly low frequencies
would be rare. By contrast, exprand would provide a more natural-sounding distribution. So,
don’t just pick one of these two methods at random (no pun intended)! Instead, think about
how the values will be used, and make an informed decision.
The choose method, demonstrated in Code Example 1.21, returns a randomly selected item from an array and is handy for tasks like picking notes from a scale:
(
var scale, note;
scale = [0, 2, 4, 5, 7, 9, 11, 12];
note = scale.choose;
)
Suppose we have a bag of 1,000 marbles. 750 are red, 220 are green, and 30 are blue. How
would we simulate picking a random marble? We could create an array of 1,000 symbols and
use choose, but this is clumsy. Instead, we can use wchoose to simulate a weighted choice,
shown in Code Example 1.22. This method requires an array of weight values, which must
sum to one and be the same size as the collection we’re choosing from. To avoid doing the
math in our heads, we can use normalizeSum, which scales the weights so that they sum to
one, while keeping their relative proportions intact.
(
var bag, pick;
bag = [\red, \green, \blue];
pick = bag.wchoose([750, 220, 30].normalizeSum);
)
1.8 Conditional Logic
Conditional logic allows a program to make decisions based on tests, most often through expressions of the form "if-then-else," which tend to be relatively common. Other conditional mechanisms are documented in a reference file called "Control Structures."
1.8.1 IF
One of the most common conditional methods is if, which includes three components: (1)
an expression that represents the test condition, which must return a Boolean value, (2) a
function to be evaluated if the condition is true, and (3) an optional function to be evaluated
if false. Code Example 1.24 demonstrates the use of conditional logic to model a coin flip (a
value of 1 represents “heads”), in three styles that vary in syntax and whitespace. The second
style tends to be preferable to the first, because it places the “if” at the beginning of the expres-
sion, mirroring how the sentence it represents would be spoken in English. Because the entire
expression is somewhat long, the multi-line approach can improve readability. Note that in the
first expression, the parentheses around the test condition are required to give precedence to
the binary operator == over the if method. Without parentheses, the if method is applied to
the number 1 instead of the full Boolean expression, which produces an error.
// "receiver-dot-method" syntax:
([0, 1].choose == 1).if({\heads.postln}, {\tails.postln});
// "method(receiver)" syntax:
if([0, 1].choose == 1, {\heads.postln}, {\tails.postln});
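// multi-line style:
if(
    [0, 1].choose == 1,
    {\heads.postln},
    {\tails.postln}
);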
1.8.2 AND/OR
The methods and and or (representable using binary operators && and ||), allow us to check
multiple conditions. For example, if ice cream is on sale, and they have chocolate, then I’ll buy
two. Code Example 1.25 models a two-coin flip in which both must be “heads” for the result
to be considered true. Again, parentheses around each conditional test are required to ensure
correct order of operations.
(
if(
([0, 1].choose == 1) && ([0, 1].choose == 1),
{"both heads".postln},
{"at least one tails".postln}
);
)
A case statement, shown in Code Example 1.26, accepts an arbitrary number of function pairs, in which the first function of each pair contains a test condition and the second contains code to be evaluated if that condition is true. The first test that returns true determines the outcome; here, a simulated die roll selects a color:
(
var roll = rrand(1, 6);
case(
{roll == 1}, {\red.postln},
{roll == 2}, {\orange.postln},
{roll == 3}, {\yellow.postln},
{roll == 4}, {\green.postln},
{roll == 5}, {\blue.postln},
{roll == 6}, {\purple.postln}
);
)
A switch statement is similar to case, but with a slightly different syntax, shown in Code
Example 1.27. We begin with some value—not necessarily a Boolean—and provide an ar-
bitrary number of value-function pairs. The interpreter will check for equality between the
starting value and each of the paired values. For the first comparison that returns true, the
corresponding function is evaluated.
(
var roll = rrand(1, 6);
switch(
roll,
1, {\red.postln},
2, {\orange.postln},
3, {\yellow.postln},
4, {\green.postln},
5, {\blue.postln},
6, {\purple.postln}
);
)
1.9 Iteration
One of the most attractive aspects of computer programming is its ability to handle repetitive
tasks. Iteration refers to techniques that allow a repetitive task to be expressed and executed
concisely. Music is full of repetitive structures and benefits greatly from iteration. More gen-
erally, if you ever find yourself typing a nearly identical chunk of code many times over, or
relying heavily on copy/paste, this could be a sign that you should be using iteration.
Two general-purpose iteration methods, do and collect, often make good choices for iter-
ative tasks. Both are applied to some collection—usually an array—and both accept a function
as their sole argument. The function is evaluated once for each item in the collection. A primary
difference between these two methods is that do returns its receiver, while collect returns a
modified collection of the same size, populated using values returned by the function. Thus,
do is a good choice when we don’t care about the values returned by the function, and instead
simply want to “do” some action a certain number of times. On the other hand, collect is a
good choice when we want to modify or interact with an existing collection and capture the re-
sult. At the beginning of an iteration function, we can optionally declare two arguments, which
represent each item in the collection and its index as the function is repeatedly executed. By
declaring these arguments, we give ourselves access to the collection items within the function.
In Code Example 1.28(a), we iterate over an array of four items, and for each item, we
post a string. In this case, the items in the array are irrelevant; the result will be the same as
long as the size of the array is four. Performing an action some number of times is so common,
that do is also defined for integers. When do is applied to some integer n, the receiver will
be interpreted as the array [0, 1, ... n-1], thus providing a shorter alternative, depicted in
Code Example 1.28(b). In Code Example 1.28(c), we declare two arguments and post them,
to visualize the values of these arguments.
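A sketch of these three variants (array contents chosen arbitrarily):

// (a) iterate over a four-item array, posting a string each time:
[7, 5, 0, 9].do({ "action!".postln });

// (b) equivalent behavior, using an integer as the receiver:
4.do({ "action!".postln });

// (c) declare arguments to access each item and its index:
[7, 5, 0, 9].do({ |item, index| [item, index].postln });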
A simple usage of collect appears in Code Example 1.29. We iterate over the array, and for
each item, return the item multiplied by its index. collect returns this new array.
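A sketch consistent with this description (array contents chosen arbitrarily):

y = [4, 8, 3, 9].collect({ |item, index| item * index }); // -> [0, 8, 6, 27]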
Numerous other iteration methods exist, several of which are depicted in Code Example 1.30.
The isPrime method is featured here, which returns true if its receiver is a prime number,
and otherwise returns false.
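The array x used below is not shown in this excerpt; the following is a hypothetical assignment consistent with the posted results:

x = [100, 101, 102, 103, 104, 105, 106, 107];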
// return the first item for which the function returns true:
x.detect({ |n| n.isPrime }); // -> 101
// return true if the function returns true for at least one item:
x.any({ |n| n.isPrime }); // -> true
// return the number of items for which the function returns true:
x.count({ |n| n.isPrime }); // -> 3
1.10 Summary
The topics presented in this chapter provide foundational support throughout the rest of this
book, and hopefully throughout your own explorations with SC. Due to the fundamental
nature of these concepts, some of the code examples in this chapter may feel a bit dry, ab-
stract, or difficult to fully digest. If you don’t fully understand how and why these techniques
might apply in a practical setting, that’s reasonable. As you continue reading, these concepts
will arise again and again, reflected through applications in synthesis, sampling, sequencing,
and composition. For now, to provide some practical context, Companion Code 1.2 explores
one possible application that ties several of these concepts together: creating a function that
converts a pitch letter name and octave to the corresponding MIDI note number.
Notes
1 Matthew J. Wright and Adrian Freed, “Open SoundControl: A New Protocol for Communicating
with Sound Synthesizers,” in Proceedings of the 1997 International Computer Music Conference,
Proceedings of the International Computer Music Association (San Francisco: International
Computer Music Association, 1997). More information about OSC can be found at https://opensoundcontrol.stanford.edu/.
2 Technically, “method” and “message” are not synonymous, though they are closely related. A
message is a more abstract concept of requesting an operation to be performed on an object. A
method is a more concrete concept that encompasses the name and description of how a message
should be implemented for an object. In a practical context, it is within reason to use these terms
interchangeably.
3 At the time of writing, the documentation files are also available on the web at https://doc.sccode.org/.
CHAPTER 2
Essentials of Making Sound
2.1 Overview
In this chapter, we’ll move beyond the basics of the SC language and begin focusing on sound-
specific features. Primarily, we’ll introduce core classes and methods explicitly designed to
support audio, as well as some auxiliary methods and tools that enhance the creative workflow.
To boot the audio server, evaluate the following line:
s.boot;
By default, the keyboard shortcut [cmd]+[b] will also boot the server. As you run this line, in-
formation will appear in the post window. If the boot is successful, the numbers in the server
status bar in the bottom-right corner of the IDE will turn green, and the post window should
look like the image in Figure 2.1. If the server numbers don’t turn green, the server has not
successfully booted. Boot failures are relatively uncommon, but when they do occur, they
are rarely cause for alarm and almost always quickly rectifiable. For example, if you’re using
separate hardware devices for audio input/output, the server will not boot if these devices
are running at different sample rates. Alternatively, if a running server is unexpectedly in-
terrupted (e.g., if the USB cable for your audio interface becomes unplugged), an attempt to
reboot may produce an error that reads “Exception in World_OpenUDP: unable to bind udp
socket,” or “ERROR: server failed to start.” This message appears because there is likely a
hanging instance of the audio server application that must be destroyed, which can be done
by evaluating Server.killAll before rebooting. In rarer cases, a boot failure may be resolved
by recompiling the SC class library, quitting and reopening the SC environment, or—as a last
resort—restarting your computer.
FIGURE 2.1 The post window and status bar after the server has successfully booted.
We know that the lowercase letters a through z are available as interpreter variables, but
we didn’t store anything in s. So, how does s.boot work? When we launch the SC environ-
ment, a reference to the server is automatically assigned to this variable. Evaluate it, and you’ll
see that the interpreter returns “localhost,” the default name of your local server application.
This assignment is a convenience to the user, providing access to the server through a single
character. If you accidentally overwrite this variable, you can remake the assignment manu-
ally, by evaluating:
s = Server.local;
Server.local.boot;
Or, you can click the server numbers in the status bar to access a pop-up menu, from
which the server can be booted. You can quit the server from this same pop-up menu, or by
evaluating one of the following two expressions:
s.quit;
Server.local.quit;
Some examples in this book omit s.boot and assume your server is already running. If you
attempt to work with sound while the server is offline, the post window will display a friendly
warning that the server is not running.
Musical fluency doesn’t demand intimate familiarity with every UGen. UGens are a
means to an end, so you’ll only need to get acquainted with those that help achieve your goals.
This book focuses on a core set of UGens that are broadly applicable in synthesis, sampling,
and signal processing.
A signal whose frequency exceeds the Nyquist frequency (half the sample rate) cannot be faithfully captured by sample measurement, producing a signal whose frequency is lower than the Nyquist frequency. This
phenomenon is called aliasing, or foldover, depicted in Figure 2.2.
FIGURE 2.2 A simplified diagram of aliasing, including (a) a continuous periodic signal whose
frequency exceeds the Nyquist frequency, (b) the result of sampling this periodic signal, and (c) the
erroneous attempt to reconstruct the original signal using sample data. Dashed lines indicate
moments when samples are taken.
Digital audio is processed in blocks of samples, rather than one-by-one. Block processing
is less taxing in terms of processing power but introduces a small amount of latency (an entire
block must be computed before it can be rendered). The size of a block varies but is almost
always a power of two. As block size increases, CPU load decreases and latency increases, and
vice-versa. Like samples and sample rates, the concept of sample blocks is not unique to SC; it
is ubiquitous throughout the digital audio realm. In SC, a block of samples is usually called a
“control block” or “control cycle.”
Assuming you’ve already booted the server, you can check the sample rate and block size
by evaluating the following expressions:
s.sampleRate;
s.options.blockSize;
An audio rate UGen produces output values at the sample rate, which is the highest-
resolution signal available to us. If you want to hear a signal through your speakers, it must
run at the audio rate. Generally, if you require a high frequency signal, a fast-moving signal,
or anything you want to monitor directly, you should use ar.
Control rate UGens run at the control rate. This rate is equal to sample rate divided
by block size. If the sample rate is 48,000 and the block size is 64, then the control rate is 48,000 ÷ 64 = 750 samples per second; the UGen outputs one value at the start of each control cycle. Control rate UGens have a lower resolution but consume less processing power.
They are useful when you need a relatively slow-moving signal, like a gradual envelope or a
low-frequency oscillator. Because the control rate is a proportionally reduced sample rate, the
Nyquist frequency is similarly reduced. In this case, the highest control rate frequency that
can be faithfully represented is 750 ÷ 2 = 375 Hz. Therefore, if you require an oscillator with
a frequency higher than 375 Hz, kr is a poor choice. In fact, signal integrity degrades as the
frequency of a signal approaches the Nyquist frequency, so ar is a good choice even if a UGen’s
frequency is slightly below the Nyquist frequency.
Consider a sine oscillator with a frequency of 1 Hz, used to control the cutoff frequency
of a filter. It’s possible to run this oscillator at the audio rate, but this would provide more res-
olution than we need. If using the control rate instead, we’d calculate 750 samples per cycle,
which is more than enough to visually represent one cycle of a sine wave. In fact, as few as
20 or 30 points would probably be enough to “see” a sinusoidal shape. This is an example in
which we can take advantage of control rate UGens to reduce the server’s processing load,
without sacrificing sound quality. Figure 2.3 depicts a 120 Hz sine oscillator running at the
audio rate and the control rate. If you’re unsure of which rate to use in a particular situation,
ar is usually the safer choice.
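A minimal sketch of this scenario follows (borrowing LPF, a low-pass filter UGen that appears again in Chapter 3; the specific values are illustrative):

(
x = {
var sig, lfo;
lfo = SinOsc.kr(1).range(200, 2000); // 1 Hz control rate oscillator
sig = PinkNoise.ar(mul: 0.1) ! 2;
sig = LPF.ar(sig, lfo); // the oscillator sweeps the cutoff frequency
}.play;
)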
FIGURE 2.3 A few centiseconds of a 120 Hz sine wave running at the audio rate and control rate.
Initialization rate UGens are the rarest of the three. In fact, many UGens won’t under-
stand the ir method. A UGen at the initialization rate produces exactly one sample when
created and holds that value indefinitely. So, the initialization rate is not really a rate at all; it’s
simply a means of initializing a UGen so that it behaves like a constant. The CPU savings ir
provides are obvious, but what’s the point of a signal that’s stuck on a specific value? There are
a few situations in which this choice makes sense. One example is the SampleRate UGen, a
UGen that outputs the sample rate of the server. In some signal algorithms, we need to per-
form a calculation involving the sample rate, which may vary from one system to another.
Generally, the sample rate does not (and should not) change while the server is booted, so it’s
inefficient and unnecessary to repeatedly calculate it.
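A minimal sketch of this usage (the frequency calculation here is arbitrary, purely illustrative):

(
{
var sig, freq;
freq = SampleRate.ir / 100; // computed once; 480 Hz on a 48 kHz system
sig = SinOsc.ar(freq, mul: 0.1) ! 2;
}.play;
)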
FIGURE 2.4 A screenshot of the class methods section of the SinOsc help file.
TIP.RAND(); WORKING WITH pi
In the case of SinOsc, one cycle is equal to 2π radians. In SC, we use the special key-
word pi to represent π. Expressions such as pi/4, 3pi/2, etc., are valid.
freq and phase are relatively specific to SinOsc. Most oscillator UGens have a frequency
argument, but not all have a phase argument. Those that do might measure it using a dif-
ferent scale. The triangle wave oscillator LFTri, for example, expects a phase value between
0 and 4. The pulse wave generator Pulse has no phase input but has a width argument that
determines its duty cycle (the percentage of each cycle that is “high” vs. “low”). Further still, a
noise generator like PinkNoise has neither a frequency nor a phase argument.
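For instance (argument values here are arbitrary):

{LFTri.ar(200, iphase: 2, mul: 0.1) ! 2}.play; // phase expressed on a scale from 0 to 4
{Pulse.ar(200, width: 0.25, mul: 0.1) ! 2}.play; // a 25% duty cycle
{PinkNoise.ar(mul: 0.1) ! 2}.play; // neither frequency nor phase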
By contrast, mul and add are nearly ubiquitous throughout the UGen family, and almost
always occur as the last two arguments. Most UGens are either bipolar or unipolar; the de-
fault output range of a bipolar UGen is -1 to +1, while the default range of a unipolar UGen is
bounded between 0 and +1. These ranges are considered “nominal,” and register at or near 0
decibels on a digital signal meter. This is the highest level that can be represented in a digital
system, so you should envision a nominal signal as being quite loud!
By virtue of its multiplicative behavior, mul corresponds to signal amplitude. The default
mul value is one, which has no effect. A mul value of 0.5, for example, reduces amplitude by
half. Specifying a mul value greater than one will push signal amplitude past representational
limits, and distortion may occur. When directly monitoring the output of a UGen, a mul
value around 0.05 or 0.1 is a good “ballpark” range, which is sufficiently loud without being
unpleasant, and which provides ample headroom.
{PinkNoise.ar(mul: 1) ! 2}.play;
As the noise plays, slowly turn up your system volume until the noise sounds
strong and healthy. It shouldn’t be painful, but it should be unambiguously loud, per-
haps even slightly annoying. Once you’ve set this volume level, consider your system
“calibrated” and don’t modify your hardware levels again. This configuration will
encourage you to create signal levels in SC that are comfortable and present a min-
imal risk of distorting.
By contrast, a bad workflow involves setting your system volume too low, which
encourages compensation with higher mul values. This configuration gives the
misleading impression that your signal levels are low, when they’re actually quite
high, with almost no headroom. In this setup, you’ll inevitably find yourself in the
frustrating situation of having signals that seem too quiet, but with levels that are
constantly “in the red.”
add defaults to zero, in which case it (like mul) has no effect on a UGen’s output signal.
When playing a UGen through your speakers, it’s generally not useful or recommended to
specify a non-zero add value. If add is non-zero, the entire waveform vertically shifts, so that
its point of “equilibrium” no longer coincides with true zero. A vertically shifted signal may
not sound any different to the human ear, but if routed through a loudspeaker, the speaker
cone will vibrate asymmetrically, which may fatigue or stress the loudspeaker over long periods
of time, possibly degrading its integrity and lifespan.
These “rules” for mul and add apply mainly to situations in which we’re directly listening
to a UGen. There are other cases, such as modulating one signal with another, where it’s de-
sirable to specify custom mul/add values.
TIP.RAND(); ORDER OF OPERATIONS WITH mul/add
When specifying mul/add values, keep in mind that these operations always occur in
a specific order: The multiplication happens first, then the addition.
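For example, the following two expressions (with illustrative values) are equivalent:

SinOsc.kr(1, mul: 0.5, add: 2); // output ranges from 1.5 to 2.5
(SinOsc.kr(1) * 0.5) + 2; // multiplication first, then addition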
The main takeaway from this discussion is that each UGen is unique, not only in its purpose,
but also in terms of the input arguments it accepts, their default values, and their expected
ranges. As you start working with sound, don’t guess or make assumptions! Take the time to
read the help files and make sure the values you provide are consistent with what the UGen
expects. Few things in this world are quite as startling as accidentally supplying a frequency
value for an amplitude argument! For additional context, you may find it helpful to read the
first few paragraphs of the “UGen” class help file, as well as a guide file titled “Unit Generators
and Synths.”
Press [cmd]+[period] to stop the sound. Take a moment to memorize this keyboard shortcut!
Incorporate it into your muscle memory. Carve it into your bedroom walls. Dream about it at
night. It’s your trusty panic button.
Let’s unpack the expression in Code Example 2.1 and examine some variations. First, let’s
acknowledge that listening to sound in only one speaker can be uncomfortable, particularly
on headphones. SC interprets an array of UGens as a multichannel signal. So, if the function
contains an array of two SinOsc UGens, the first signal will be routed to the left speaker, and
the second to the right. The duplication shortcut, shown in Code Example 2.2, is a quick way
to create such an array. Multichannel signals will be explored later in this chapter.
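A sketch in the spirit of Code Example 2.2:

{SinOsc.ar(300, 0, 0.1, 0) ! 2}.play;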
We know that SinOsc expects four arguments, and we’ve provided four values. The fre-
quency is 300, the phase is zero, and so on. When provided this way, we’re responsible for
the correct order. SC isn’t clever enough to determine which is which if we mix things up.
An experienced user may remember what these numbers mean, but for a newcomer, this
minimal style may involve a lot of back-and-forth between the code and the help files. Code
Example 2.3 depicts an alternative, using a more verbose style:
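{SinOsc.ar(freq: 300, phase: 0, mul: 0.1, add: 0) ! 2}.play; // a sketch in the spirit of Code Example 2.3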
This longer but more descriptive approach of specifying argument values applies to all
methods, not just those associated with UGens. Companion Code 2.1 details some variations
and rules to be aware of when using this keyword style.
When calling play, we must assign the resulting sound process to a variable, so that we can
communicate with it later. While the sound is playing, we can alter it using the set method
and providing the name and value of the parameter we want to change. Code Example 2.4
shows this sequence of actions.
(
x = { |freq = 300|
SinOsc.ar(freq, mul: 0.1) ! 2;
}.play;
)
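x.set(\freq, 400); // alter the frequency of the playing sound (illustrative value)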
We can declare as many arguments as we need. Code Example 2.5 adds a second argument
that controls signal amplitude and demonstrates a variety of set messages.
(
x = { |freq = 300, amp = 0.1|
SinOsc.ar(freq, mul: amp) ! 2;
}.play;
)
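x.set(\freq, 500);
x.set(\amp, 0.05);
x.set(\freq, 700, \amp, 0.2); // multiple name-value pairs in one message (values illustrative)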
It’s often desirable to separate creating and playing a UGen function into two discrete actions.
In Code Example 2.6, we define a UGen function and store it in the interpreter variable f.
Then, we play it, storing the resulting sound process in the interpreter variable x. The former
is simply the object that defines the sound, while the latter is the active sound process that
understands set messages. It’s important not to confuse the two.
(
// define the sound
f = { |freq = 300, amp = 0.1|
SinOsc.ar(freq, mul: amp) ! 2;
};
)
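x = f.play; // play the function; x holds the resulting sound process
x.set(\freq, 400); // x, not f, understands set messages (illustrative value)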
Even if arguments in a UGen function have default values, we can override them when playing
the function. The play method has an args argument, which accepts an array of name-value
pairs, shown in Code Example 2.7.
(
f = { |freq = 300, amp = 0.1|
SinOsc.ar(freq, mul: amp) ! 2;
};
)
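x = f.play(args: [\freq, 500, \amp, 0.05]); // override the defaults at creation (values illustrative)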
(
f = { |freq = 300, amp = 0.1|
SinOsc.ar(freq, mul: amp) ! 2;
};
)
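x = f.play;
y = f.play(args: [\freq, 400]); // a second, independent sound process (value illustrative)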
y.free;
x.free;
Like [cmd]+[period], freeing a sound process also results in a hard stop, which may not be
what we want. When using function-dot-play, we can also use release to create a gradual
fade, optionally providing a fade duration measured in seconds (see Code Example 2.9).
(
f = { |freq = 300, amp = 0.1|
SinOsc.ar(freq, mul: amp) ! 2;
};
)
x = f.play;
x.release(2);
Summing one signal with another produces a new waveform in which both signals can usually be perceived. Code Example 2.10 and Figure 2.5 depict the summation of a sine wave with pink noise.
(
x = {
var sig;
sig = SinOsc.ar(300, mul: 0.15);
sig = sig + PinkNoise.ar(mul: 0.1);
sig = sig ! 2;
}.play;
)
x.release(2);
FIGURE 2.5 A graphical depiction of summing a sine wave with pink noise.
When a binary operator is used between a number and a UGen, the operation is applied
to the number and every sample value produced by the UGen. This being the case, multipli-
cation and addition can be used instead of providing argument values for mul and add. Code
Example 2.11 rewrites the code from Code Example 2.10 using this approach.
(
x = {
var sig;
sig = SinOsc.ar(300) * 0.15;
sig = sig + (PinkNoise.ar * 0.1);
sig = sig ! 2;
}.play;
)
x.release(2);
TIP.RAND(); A UGEN FUNCTION PLAYS THE LAST EXPRESSION
Just as ordinary functions return the value of their last expression when evaluated, the
output signal from a UGen function is also determined by its last expression. In the
first of the following two functions, we’ll only hear pink noise, despite creating a sine
wave. In the second function, the last expression is the sum of both signals, which is
what we hear.
(
{
var sig0, sig1;
sig0 = SinOsc.ar(300, mul: 0.15) ! 2;
sig1 = PinkNoise.ar(mul: 0.1) ! 2;
}.play;
)
(
{
var sig0, sig1;
sig0 = SinOsc.ar(300, mul: 0.15) ! 2;
sig1 = PinkNoise.ar(mul: 0.1) ! 2;
sig0 + sig1;
}.play;
)
Multiplying one signal by another is also common. The code in Code Example 2.12
multiplies pink noise by a low-frequency sine wave, producing a sound like ocean waves.
This is a simple example of signal modulation, which involves the use of one signal to influ-
ence some aspect of another. A phase value of 3π/2 causes the sine oscillator to begin at the
lowest point in its cycle, and the multiplication/addition values scale and shift the output
values to a new range between 0 and 0.2. The effects of these argument values are visualized
in Figure 2.6.
(
x = {
var sig, lfo;
lfo = SinOsc.kr(freq: 1/5, phase: 3pi/2, mul: 0.1, add: 0.1);
sig = PinkNoise.ar * lfo;
sig = sig ! 2;
}.play;
)
x.release(2);
FIGURE 2.6 An incremental visualization of how argument values influence the waveform of a
SinOsc UGen.
When we want a UGen’s output to range between some arbitrary minimum and maximum,
using mul/add sometimes involves cumbersome mental math. Even worse, the actual range of
the UGen isn’t immediately clear from looking at the code. A better approach involves using
one of several range-mapping methods, such as range, demonstrated in Code Example 2.13.
This method lets us explicitly provide a minimum and maximum, avoiding the need to deal
with mul/add. Table 2.2 lists some common range-mapping methods.
(
x = {
var sig, lfo;
lfo = SinOsc.kr(freq: 0.2, phase: 3pi/2).range(0, 0.2);
sig = PinkNoise.ar * lfo; // plausible completion, mirroring Code Example 2.12
sig = sig ! 2;
}.play;
)

x.release(2);
TABLE 2.2 A list of common range-mapping methods.
As you start exploring UGen functions of your own, remember that just because a math op-
eration can be used doesn’t necessarily mean it should. Dividing one signal by another, for
example, is dangerous! This calculation may involve division by some extremely small value
(or even zero), which is likely to generate a dramatic amplitude spike, or something similarly
unpleasant. Experimentation is encouraged, but you should proceed with purpose and mindfulness. Mute or turn down your system volume before you try something unpredictable.
2.5 Envelopes
If we play a function containing some oscillator or noise generator, and then step away for a
coffee, we’d return sometime later to find that sound still going. In music, an infinite-length
sound isn’t particularly useful. Instead, we usually like sounds to have definitive beginnings
and ends, so that we can structure them in time.
An envelope is a signal with a customizable shape and duration, typically constructed from
individual line segments joined end-to-end. Envelopes are often used to control the amplitude
of another signal, enabling fades instead of abrupt starts and stops. By using release, we’ve
already been relying on a built-in envelope that accompanies the function-dot-play construct.
When controlling signal amplitude, an envelope typically starts at zero, ramps up to some
positive value, possibly stays there for a while, and eventually comes back down to zero. The
first segment is called the “attack,” the stable portion in the middle is the “sustain,” and the
final descent is the “release.” Many variations exist; an “ADSR” envelope, for example, has a
“decay” segment between the attack and sustain (see Figure 2.7).
It’s important to recognize that the ADSR envelope is just one specific example that happens
to be useful for modeling envelope characteristics of many real-world sounds. Ultimately, an
envelope is just a signal with a customizable shape, which can be used to control any aspect of
a signal algorithm, not just amplitude.
(
{
var sig, env;
env = Line.kr(start: 0.3, end: 0, dur: 0.5);
sig = SinOsc.ar(350) * env;
sig = sig ! 2;
}.play;
)
(
{
var sig, env;
env = XLine.kr(start: 0.3, end: 0.0001, dur: 0.5);
sig = SinOsc.ar(350) * env;
sig = sig ! 2;
}.play;
)
FIGURE 2.8 Visual result of Line (top) and XLine (bottom) when used as amplitude envelopes for an
oscillator signal.
2.5.2 DONEACTIONS
Line and XLine include an argument, named doneAction, which appears in UGens that
have an inherently finite duration. In SC, a doneAction represents an action that the audio
server takes when the UGen that contains the doneAction has finished. These actions can be
specified by integer, and a complete list of available actions and their meanings appears in the
help file for the Done UGen. Most of the descriptions in this table may look completely mean-
ingless to you, but if so, don’t worry. In practice, we rarely employ a doneAction other than
zero (do nothing) or two (free the enclosing synth). The default doneAction is zero, and while
taking no action sounds harmless, it carries consequences. To demonstrate, evaluate either of
the two code examples in Code Example 2.14 many times in a row. As you do, you’ll notice
your CPU usage will gradually creep upwards.
Why does this happen? Because a doneAction of zero tells the server to do nothing when
the envelope is complete, the envelope remains active on the server and continues to output
its final value indefinitely. These zero or near-zero values are multiplied by the sine oscillator,
which results in a silent or near-silent signal. The server is indifferent to whether a sound pro-
cess is silent; it only knows that it was instructed to do nothing when the envelope finished.
If you evaluate this code over and over, you’ll create more and more non-terminating sound
processes. Eventually, the server will become overwhelmed, and additional sounds will start
glitching (if you’ve followed these instructions and ramped up your CPU numbers, now is a
good time to press [cmd]+[period] to remove these “ghost” sounds).
From a practical perspective, when our envelope reaches its end, we consider the sound to
be totally finished. So, it makes sense to specify 2 for the doneAction. When running the code
in Code Example 2.15, the server automatically frees the sound when the envelope is done.
Evaluate this code as many times as you like, and although it won’t sound any different, you’ll
notice that CPU usage will not creep upwards as it did before.
(
{
var sig, env;
env = XLine.kr(start: 0.3, end: 0.0001, dur: 0.5, doneAction: 2);
sig = SinOsc.ar(350) * env;
sig = sig ! 2;
}.play;
)
Knowing which doneAction to specify is an important skill, essential for automating the
cleanup of stale sounds and optimizing usage of the audio server’s resources.
(
e = Env.new(
levels: [0, 1, 0],
times: [1, 3],
curve: [0, 0]
);
e.plot;
)
Let’s unpack the meaning of the numbers in Code Example 2.16. The first array contains
envelope levels, which are values that the envelope signal will visit as time progresses: the en-
velope starts at 0, travels to 1, and returns to 0. The second array specifies durations of the
segments between these levels: the attack is 1 second long, and the release is 3 seconds. The
final array determines segment curvatures. Zero represents linearity, while positive/negative
values will “bend” the segments. Note that when an Env is created this way, the size of the
first array is always one greater than either of the other two arrays. Take a moment to modify
the code from Code Example 2.16, to better understand how the numbers influence the
envelope’s shape. Code Example 2.17 illustrates how EnvGen and Env work together to create
an envelope signal in a UGen function. Keywords are used for clarity. Because it is inherently
finite, EnvGen accepts a doneAction. As before, it makes sense to specify a doneAction of 2 to
automate the cleanup process.
(
{
var sig, env;
env = EnvGen.kr(
envelope: Env.new(
levels: [0, 1, 0],
times: [1, 3],
curve: [0, 0]
),
doneAction: 2
);
sig = SinOsc.ar(350) * 0.3;
sig = sig * env;
sig = sig ! 2;
}.play;
)
Envelopes can be divided into two categories: those with fixed durations, and those that can
be sustained indefinitely. The envelopes we’ve seen so far belong to the first category, but in
the real world, many musical sounds have amplitude envelopes with indefinite durations.
When a violinist bows a string, we won’t know when the sound will stop until the bow is
lifted. An envelope that models this behavior is called a gated envelope. It has a parameter,
called a “gate,” which determines how and when the envelope signal progresses along its tra-
jectory. When the gate value transitions from zero to positive, the envelope begins and sustains
at a point along the way. When the gate becomes zero again, the envelope continues from its
sustain point and finishes the rest of its journey. Like a real-world gate, we describe this pa-
rameter as being open (positive) or closed (zero).
To create a sustaining envelope, we can add a fourth argument to Env.new(): an integer
representing an index into the levels array, indicating the value at which the envelope will
sustain. In SC terminology, this level is called the “release node.” In Code Example 2.18, the
release node is 2, which means the envelope signal will sustain at a level of 0.2 while the gate
remains open. Because gate is a parameter we’d like to change while the sound is playing, it
must be declared as an argument, and supplied to the EnvGen. In effect, this example creates
an ADSR envelope, similar to Figure 2.7: the attack travels from 0 to 1 over 0.02 seconds, the
decay drops to a level of 0.2 over the next 0.3 seconds, and the signal remains at 0.2 until the
gate closes, which triggers a one-second release.
(
f = { |gate = 1|
var sig, env;
env = EnvGen.kr(
envelope: Env.new(
[0, 1, 0.2, 0],
[0.02, 0.3, 1],
[0, -1, -4],
2
),
gate: gate,
doneAction: 2
);
sig = SinOsc.ar(350) * 0.3;
sig = sig * env;
sig = sig ! 2;
};
)
x = f.play;
x.set(\gate, 0);
In some cases, we may want to retrigger an envelope, opening and closing its gate at will, to
selectively allow sound to pass through. If so, a doneAction of 2 is a poor choice, because we
don’t necessarily want the sound process to be destroyed if the envelope reaches its end. Instead,
a 0 doneAction (the default) is the correct choice, demonstrated in Code Example 2.19, which
causes the envelope to “idle” at its end point until it is retriggered.
It’s worth being extra clear about the specific behavior of an envelope in response to gate
changes when a release node has been specified:2
• A zero-to-positive gate transition causes the envelope to move from its current level to the
second level in the levels array, using its first duration and first curve value. Note that the
envelope never revisits its first level, which is only used for initialization.
• A positive-to-zero gate transition causes the envelope to move from its current value to the
value immediately after the release node, using the duration and curve values at the same
index as the release node.
(
f = { |gate = 1|
var sig, env;
env = EnvGen.kr(
Env.new(
[0, 1, 0.2, 0],
[0.02, 0.3, 1],
[0, -1, -4],
2
),
gate
);
sig = SinOsc.ar(350) * 0.3;
sig = sig * env;
sig = sig ! 2;
};
)
x = f.play;
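x.set(\gate, 0); // close the gate; the envelope releases and idles at its end
x.set(\gate, 1); // reopen the gate to retrigger the envelope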
This retriggering ability may also be useful for fixed-duration envelopes. It’s possible but clumsy
to retrigger a fixed-duration envelope with a standard gate argument, because it requires man-
ually closing the gate before reopening. As a solution, we can precede the gate argument name
with t_, which transforms it into a “trigger-type” argument, which responds differently to set
messages. When a trigger-type argument is set to a non-zero value, it holds that value for a
single control cycle, and then almost immediately “snaps” back to zero. It’s like a real-world gate
that’s been augmented with a powerful spring, slamming shut immediately after being opened.
Code Example 2.20 depicts the use of trigger-type arguments. Note that the default gate value
is zero, which means the envelope will idle at its starting level (zero) until the gate is opened.
(
x = { |t_gate = 0|
var sig, env;
env = EnvGen.kr(
Env.new(
[0, 1, 0],
[0.02, 0.3],
[0, -4],
),
t_gate,
);
sig = SinOsc.ar(350) * 0.3;
sig = sig * env;
sig = sig ! 2;
}.play;
)
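x.set(\t_gate, 1); // retrigger; the argument snaps back to zero on its own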
EnvGen, in partnership with Env, is one of the more complex UGens in the class li-
brary, with many variations and subtleties. Both help files contain additional information, and
Companion Code 2.2 explores additional uses and features of envelopes.
Because our two ears occupy different positions on the head and body, a propagating sound wave reaches one ear slightly before it reaches the other, creating a
binaural time delay that is interpreted by the brain to create a sense of spatial awareness. Sound
localization is not solely determined by this time delay (spectrum and amplitude discrepancies
between the ears provide additional information), but it is a significant factor.
Music would be much less interesting without the ability to simulate the illusion of
physical space. In the process of mixing a recording of an ensemble, it’s desirable to “place”
instruments at specific locations, giving the listener the impression of being physically present
at a live performance. Similarly, in electronic music, creative spatial choices can have exciting,
immersive effects. The process of manipulating a signal to change its perceived spatial location
is usually called “panning.”
The illusion of space is achieved using multichannel audio signals, reproduced through
multiple loudspeakers. In a stereophonic format, we start with two signals, which were perhaps
recorded by two microphones positioned at slightly different points in space. On reproduction,
the two signals are individually routed to left and right loudspeakers. If we are seated equidis-
tantly from the speakers, and neither too close nor too far from them, we experience the auditory
illusion of a “phantom image” that appears to emanate from the space between the speakers.
The stereophonic format is convenient because it approximates our natural listening ex-
perience reasonably well, and only requires two audio channels. But there’s no reason to stop
there! There are several standardized multichannel formats involving speakers that surround
the listener, as well as newer, emerging formats for encoding three-dimensional “sound fields,”
such as Ambisonic formats.3 Realistically, if you have n speakers, then you can use them to
reproduce an n-channel signal, and you can position these speakers however you like. Build
a tower of speakers, arrange them in a long line, hang them from the ceiling, mount them on
giant robotic arms that flail wildly. There are no rules!
SC can easily express and manipulate arbitrarily large multichannel signals. With a short
line of code and a reasonably powerful CPU, you can create a 500-channel signal in a matter
of seconds. Even if you don’t have 500 speakers to hear this signal in its true form, there are
UGens capable of mixing multichannel signals down to formats with fewer channels. The
bottom line is that the audio server interprets an array of UGens as a multichannel signal and
will attempt to route the individual channels to a contiguous block of output channels.
When applied to a UGen, the dup method (or its symbolic shortcut) returns an array of
UGen copies, interpreted as a multichannel signal. When we run the following line, level will
appear on both output meters.
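{SinOsc.ar(300, mul: 0.1).dup}.play; // a sketch; dup returns a two-channel signal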
FIGURE 2.10 One- and two-channel signals displayed on the server meters.
When the last line of a UGen function is an array of signals, SC attempts to route the
signal at index 0 to the lowest-numbered hardware output channel, and the item at index 1 to
the next-highest hardware output channel, and so forth, for every signal in the array. If you
are using your computer’s built-in sound card, output channels 0 and 1 correspond to your
built-in speakers or headphones. If you are using an external audio interface, these channels
correspond to the two lowest-numbered outputs on that interface. Some external interfaces
are capable of handling more than two simultaneous output channels. SC can (and should) be
configured so that its number of input/output channels matches your hardware setup. Input/
Output configuration is discussed more extensively in Chapter 6.
When we provide an array of numbers for a UGen argument, that UGen will transform
into an array of UGens, distributing each number to the corresponding UGen. This transfor-
mational behavior is called multichannel expansion and is one of SC's most distinctive and pow-
erful features. Consider the chain of code statements in Code Example 2.21, which depicts
how multichannel expansion behaves on a step-by-step basis. In particular, note how the array
[350, 353] causes the UGen to “expand” into an array of UGens. The result is a 350 Hz tone
in the left speaker, and a 353 Hz tone on the right.
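A condensed sketch of this expansion:

(
{
var sig;
sig = SinOsc.ar([350, 353], mul: 0.2);
// behaves as [SinOsc.ar(350, mul: 0.2), SinOsc.ar(353, mul: 0.2)]
}.play;
)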
When performing a binary operation involving two multichannel signals, as depicted in Code
Example 2.22, the operations are applied to corresponding pairs of channels. If one multi-
channel signal is larger than the other, the smaller signal will “wrap” its channels to accommo-
date the larger signal. These are the same behaviors we’ve already seen with arrays of numbers
in the previous chapter.
(
{
var sig, mod;
sig = SinOsc.ar([450, 800]);
mod = SinOsc.kr([1, 9]).range(0, 1);
sig = sig * mod;
sig = sig * 0.2;
}.play;
)
You may remember that we’ve seen a similar version of this behavior in the previous
chapter, when creating arrays of random numbers:
rrand(1, 9) ! 8; // eight copies of one random number
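{rrand(1, 9)} ! 8; // the function is re-evaluated, yielding eight independent values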
(
{
var sig;
sig = [SinOsc.ar(300), PinkNoise.ar];
sig = sig * 0.1;
sig = sig ! 2;
}.play;
)
In Code Example 2.24, we begin with a two-channel signal containing a sine wave in the
left channel and pink noise in the right. This signal is scaled down to produce a comfortable
monitoring level, and then erroneously duplicated, producing an array of arrays, written here
in a simplified pseudo-code style:
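// a plausible rendering of the pseudo-code:
[
[SinOsc, PinkNoise],
[SinOsc, PinkNoise]
]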
How should SC react to this? At the outermost layer, the audio server sees an array that
contains two items, and so it tries to play the 0th item in the left speaker, and the 1st item in
the right speaker. But each of these items is itself an array containing two UGens. The result,
for better or worse, is that the server sums the UGens in each internal array, resulting in the
following (again written as pseudo-code):
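// again, a plausible rendering:
[
SinOsc + PinkNoise,
SinOsc + PinkNoise
]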
So, we hear both UGens in both speakers, and our intended stereo separation of these two
signals is lost. The takeaway here is to be mindful of the channel size of your signals at every
step of a UGen function, and only apply multichannel operations when it is appropriate
to do so.
(
{
var sig, pan;
pan = SinOsc.kr(0.5) * 0.8;
sig = PinkNoise.ar * 0.2;
sig = Pan2.ar(sig, pan);
}.play;
)
Pan2 should always receive a one-channel input signal. If it receives a multichannel input
signal, the output signal will be an array of arrays, similar to the pitfall illustrated in the pre-
vious section.
What happens if we create a multichannel signal with more than two channels? In
Code Example 2.26, we create a 50-channel signal consisting of sine waves with random
frequencies. If we play this signal as-is, we’ll hear only the first two channels, because under
typical conditions there are only two channels that correspond to loudspeakers.
(
{
var sig, freq;
freq = {exprand(200, 2000)} ! 50;
sig = SinOsc.ar(freq) * 0.1;
}.play;
)
Splay, demonstrated in Code Example 2.27, is a UGen that mixes a multichannel signal
to a two-channel format in which the input channels are “placed” at equidistant points from
left-to-right. It’s a convenient option for hearing every channel of an arbitrarily large multi-
channel signal.
(
{
var sig, freq;
freq = {exprand(200, 2000)} ! 50;
sig = SinOsc.ar(freq) * 0.1;
sig = Splay.ar(sig);
}.play;
)
Exploring variations is left as an open exercise for the reader. It’s possible to create in-
tricate multichannel sounds with these simple principles, and we’ll encounter many such
sounds throughout this book. You may also want to read the guide file titled “Multichannel
Expansion” to refine your understanding.
When we play a UGen function, the returned object is actually a Synth, which we've been referring
to as a “sound process” up to this point. As an analogy for using these two classes, consider
baking a cake. A SynthDef is like a recipe book, containing instructions for baking a variety
of cakes. A Synth is like a cake, created by following a recipe from the book. From this single
book, it’s possible to create lots of different cakes, but the book itself is not a cake (and not
nearly as delicious). If you wanted to make twenty cakes, you wouldn’t need twenty books. In
other words, two copies of the same book are redundant, but two identical cakes are not nec-
essarily redundant (maybe your guests are extra hungry). Like a good recipe book, a SynthDef
should be maximally flexible, so that it can be used to spawn a maximally diverse collection of
tangible results. If a SynthDef is poorly designed (i.e., with a limited set of recipes), the sounds
that can be produced will be similarly limited. Flexible SynthDef design primarily involves
declaring plenty of arguments, so that numerous UGen parameters can be customized and
controlled.
For short examples and quick tests, function-dot-play is convenient and saves time.
However, in cases where reuse and flexibility of signal algorithms are of greater importance,
it’s preferable to use SynthDef and Synth. In fact, using Synth and SynthDef is the only way
to create sound in SuperCollider. When we invoke function-dot-play, a SynthDef and Synth
are created for us in the background.
(
x = {
var sig;
sig = SinOsc.ar([350, 353]);
sig = sig * 0.2;
}.play;
)
x.free;
Consider the UGen function in Code Example 2.28. We can change it into a SynthDef using a few simple steps, summarized here and detailed below:
• Enclose the UGen function in SynthDef.new, providing a symbol as a name.
• Designate the output signal by adding an output UGen, such as Out, to the function.
• Close the enclosure and apply the add method to make the SynthDef available to the server.
A new SynthDef expects two arguments: a symbol, which represents the SynthDef’s name,
and a UGen function that defines its signal algorithm. Most algorithms, particularly those
that generate or process a signal, produce some output signal. When this is the case, we must
explicitly designate the output signal and its destination, by including an output UGen in the
function. There are several types of output UGens, but Out is quite common. The first argu-
ment of Out indicates the destination bus (0 corresponds to your lowest-numbered hardware
channel, usually your left speaker). The second argument is the signal to be sent to that bus.
If the output signal is a multichannel signal, additional channels will be routed to busses with
incrementally higher bus indices. For example, if Out is used to send a two-channel signal to
bus 0, the channels will be routed to busses 0 and 1. Finally, we close the SynthDef.new en-
closure, and use the add method to make the SynthDef available to the audio server. Code
Example 2.29 depicts the result of this process.
(
SynthDef(\test, {
var sig;
sig = SinOsc.ar([350, 353]);
sig = sig * 0.2;
Out.ar(0, sig);
}).add;
)
Once a SynthDef has been added, it will remain available as long as the SC environment re-
mains open and the server remains booted. If you want to make changes to the SynthDef, you
can re-evaluate the code, and the new definition will overwrite the old one. There can only be
one SynthDef with a specific name, so if you have multiple SynthDefs, be sure to give them
unique names.
x = Synth(\test);
x.free;
When we use function-dot-play, SC not only creates a SynthDef and Synth in the background
but also adds a few conveniences. If our function does not include a gated envelope, SC creates
one automatically, with an argument named “gate.” The release method, when applied to a
Synth, assumes there is a “gate” argument used to control a sustaining envelope. Additionally,
SC adds an Out UGen if one does not already exist.
When using SynthDef and Synth ourselves, no such conveniences are applied. Instead, we
get exactly what we specify. If we don’t include an amplitude envelope, the release method will
have no effect. Similarly, if we don’t include an output UGen, the Synth will not make any sound.
(
SynthDef.new(\test, {
arg freq = 350, amp = 0.2, atk = 0.01, dec = 0.3,
slev = 0.4, rel = 1, gate = 1, out = 0;
var sig, env;
env = EnvGen.kr(
Env.adsr(atk, dec, slev, rel),
gate,
doneAction: 2
);
sig = SinOsc.ar(freq + [0, 1]);
sig = sig * env;
sig = sig * amp;
Out.ar(out, sig);
}).add;
)
x = Synth(\test);
x.set(\freq, 450);
x.set(\amp, 0.5);
To instantiate a Synth whose initial argument values differ from the SynthDef defaults, we
provide an array of name-value argument pairs, depicted in Code Example 2.32, similar to the
approach shown in Code Example 2.7. The array is the second argument that Synth.new()
expects, so no args: keyword is needed.
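x = Synth(\test, [\freq, 650, \amp, 0.1]); // a sketch in the spirit of Code Example 2.32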
x.set(\gate, 0);
It’s difficult to overstate the value of arguments in a SynthDef. Even if you declare an argu-
ment but never manipulate it, the fact that it exists provides the flexibility to do so, if you
change your mind. If you want your cake frosting to be sometimes vanilla and sometimes
chocolate, you’ll need to declare a flavor argument.
At this point, we’re able to start making somewhat interesting and musical sounds,
for example, by using iteration to create tone clusters. A short example appears in Code
Example 2.33. Modifying this example is left as an open exercise to the reader.
(
// return an array of four Synths, assigned to x
x = [205, 310, 525, 700].collect({ |f|
Synth.new(\test, [\freq, f, \amp, 0.1]);
});
)
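One way to fade out the whole cluster is to iterate over the array and close each Synth's gate (a sketch):

x.do({ |synth| synth.set(\gate, 0); });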
Despite all its arguments, this SynthDef represents a relatively small book of recipes. It won’t
allow us to play an audio sample or add a reverb effect. However, no one SynthDef should be
designed to produce every imaginable sound, just as no single recipe book will provide you
with every cake recipe known to humankind. Indeed, as your projects become more complex,
they will necessitate a modest collection of SynthDefs, each with a different purpose, just as
you might accumulate a library of cake cookbooks over time. Creating SynthDefs flexible
enough to produce a wide variety of sounds, but not so complex that they become unwieldy,
is a skill that develops over time. Nevertheless, there are no hard rules, and there may be
situations that call for an unusually large/complex SynthDef, and others that call for more
minimal design.
(
SynthDef(\rand_lang, {
var sig = SinOsc.ar({exprand(200, 2000)} ! 30);
sig = Splay.ar(sig) * 0.05;
Out.ar(0, sig);
}).add;
)
(
SynthDef(\rand_ugen, {
var sig = SinOsc.ar({ExpRand(200, 2000)} ! 30);
sig = Splay.ar(sig) * 0.05;
Out.ar(0, sig);
}).add;
)
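The difference between these two SynthDefs surfaces when creating multiple Synths: exprand is a language-side method, evaluated once when the SynthDef is built, whereas ExpRand is a UGen that generates new values on the server each time a Synth is created.

Synth(\rand_lang); // the same 30 frequencies every time
Synth(\rand_ugen); // a fresh set of 30 frequencies for each new Synth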
midiratio and ratiomidi are similarly useful methods that convert back and forth between
an interval, measured in semitones, and the frequency ratio that interval represents. For ex-
ample, 3.midiratio represents the ratio between an F and the D immediately below it, be-
cause these two pitches are three semitones apart. Negative semitone values represent pitch
movement in the opposite direction.
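For example:

3.midiratio; // -> 1.1892... (three semitones up, expressed as a ratio)
-3.midiratio; // -> 0.8409... (three semitones down)
1.5.ratiomidi; // -> 7.0195... (a 3:2 ratio spans about seven semitones)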
Similarly, when expressing signal amplitude, a normalized range between zero and one isn’t al-
ways the most intuitive choice. The ampdb method converts a normalized amplitude to a decibel
value and dbamp does the opposite. A value of zero dB corresponds to a nominal amplitude value
of one. If your audio system is properly calibrated, a decibel value around -20 dB should produce
a comfortable monitoring level, and a typical signal will become inaudible around -80 dB.
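For example:

-20.dbamp; // -> 0.1
0.25.ampdb; // -> -12.04...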
For efficiency reasons, it is preferable not to build these methods into a SynthDef, and instead
call them when creating or modifying a Synth, so that the server does not have to repeatedly
perform these calculations. A few examples appear in Code Example 2.34.
(
SynthDef.new(\test, {
arg freq = 350, amp = 0.2, atk = 0.01, dec = 0.3,
slev = 0.4, rel = 1, gate = 1, out = 0;
var sig, env;
env = EnvGen.kr(
Env.adsr(atk, dec, slev, rel),
gate,
doneAction: 2
);
sig = SinOsc.ar(freq + [0, 1]);
sig = sig * env;
sig = sig * amp;
Out.ar(out, sig);
}).add;
)
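A sketch of such expressions, applied when creating and modifying a Synth (values illustrative; midicps converts a MIDI note number to Hz):

x = Synth(\test, [\freq, 60.midicps, \amp, -20.dbamp]);
x.set(\freq, 67.midicps); // up a perfect fifth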
x.set(\gate, 0);
FIGURE 2.11 Several graphical server tools. Clockwise from upper left: Stethoscope, Server Meters,
Server GUI, Volume Slider, Node Tree, and FreqScope (also called Spectrum Analyzer or Frequency
Analyzer).
(
x = {
var sig, freq;
freq = SinOsc.kr(0.2).exprange(200, 800).poll(20);
sig = SinOsc.ar(freq);
sig = sig * 0.2;
sig = sig ! 2;
}.play;
)
Plotting a UGen function produces no sound, which makes it a safe way to examine potentially loud sounds. If the output of the UGen is a multichannel signal, the result will be a multichannel plot.
Plotting a UGen function returns an instance of a class named Plotter, a graphical display
that responds to a variety of methods that alter its appearance. These changes can be invoked
through code, but it is simpler to rely on a handful of keyboard shortcuts, enabled when a
signal plot is the topmost window. A list of keyboard shortcuts appears in the Plotter help
file. If a plot contains an especially large number of data points, attempts to move or resize the
window may produce sluggish behavior. UGen plots are most nimble and useful when plot-
ting a relatively small set of data.
2.9.3 STETHOSCOPE
The Stethoscope class provides a real-time digital oscilloscope, useful for visually monitoring
the shape and behavior of a signal. Assuming the server is booted, this tool can be created by
evaluating:
s.scope;
By default, the oscilloscope monitors audio busses 0 and 1. In a typical setup, anything sent
to loudspeakers will appear on the scope. Like Plotter, a Stethoscope instance responds to
methods that alter its behavior and appearance, most of which can be invoked through mouse
interaction or keyboard shortcuts. A table of these shortcuts appears in the Stethoscope
help file.
Periodic signals may appear to be unstable or jumpy, or to move forward/backward on
the oscilloscope. This is an example of the “wagon wheel effect,” which is the same phe-
nomenon that causes tires to appear as if they are spinning backward or standing still in car
commercials. The illusion results from two periodic processes being superimposed. In our
case, the frame rate of the scope interferes with the apparent frequency of the signal. You can
attempt to bring these two periodic processes into harmony by adjusting horizontal zoom level
using the bottom slider. When properly “tuned,” the signal movement will appear to stand
still, and it becomes much easier to understand its shape and behavior.
2.9.4 FREQSCOPE
The FreqScope class is like Stethoscope but provides a real-time spectral analyzer instead of
a waveform display. This tool is created by evaluating:
FreqScope.new;
A FreqScope can only monitor one channel at a time and analyzes audio bus zero by default.
It’s possible to monitor a different bus using the number box on the right-hand side of the an-
alyzer window.
s.volume.gui;
If you adjust this slider and then close the slider window, the volume change will remain in
effect.
s.makeGui;
Clicking the “record” button will cause SC to begin recording any signal that plays on busses
0 and 1. When clicked again, recording will stop and an audio file will be written to your hard
drive. The post window displays the path to the newly created audio file, and the location of
your default recording directory can be printed by evaluating:
Platform.recordingsDir;
Your recording may include a second or two of silence at the beginning of the file, on account
of time passing between clicking the button and running your sound code, but this silence is
easily trimmed in any waveform editor. The recording process can also be started and stopped
by evaluating s.record and s.stopRecording.
s.plotTree;
When a Synth is created, it will be represented by a white box that appears on the node tree
window (try this yourself!). The node tree can be useful for sleuthing out certain types of
problems. For instance, if you’re trying to create a sound but don’t hear anything, check the
node tree. If you see the white box that represents your Synth, you can be fairly certain the
problem is related to something in your SynthDef function. If you don’t see a white box, then
it means the problem is related to the code meant to create the Synth.
Notes
1 At the time of writing, detailed instructions for installing SC and its software dependencies on Linux
systems can be found on the SC GitHub page: https://github.com/supercollider/supercollider/wiki/
Installing-SuperCollider-from-source-on-Ubuntu.
2 It is also possible to designate a point earlier in the envelope as the “loop node.” When a loop node is
specified in addition to a release node, the envelope will loop the section between these two nodes as
long as the gate remains open. The envelope behavior described here assumes that only a release node
has been specified. See the Env help file for more information on setting a loop node.
3 “Ambisonics” refers to a mathematical framework for encoding sound information in a three-
dimensional format, irrespective of any particular loudspeaker setup. It was chiefly pioneered by
Michael Gerzon during the 1970s. The Ambisonic Toolkit, developed by researchers at DXARTS
at the University of Washington, provides a set of additional tools and classes for working with
Ambisonic sound in SC: https://www.ambisonictoolkit.net/.
PA R T II
CREATIVE TECHNIQUES
In this second part, we move beyond the fundamentals of interacting with SuperCollider and
take a closer look at well-established techniques and workflows that are often cornerstones
of electronic music-making and creative sound design projects. Specifically, the next several
chapters discuss synthesis, sampling, sequencing, signal processing, options for external con-
trol, and graphical user interface design. These topics can be thought of as the component
parts that constitute large-scale creative works, and they should provide you with lots of ideas
to play with.
CHAPTER 3
SYNTHESIS
3.1 Overview
Synthesis refers to creative applications of combining and interconnecting signal generators,
typically relying on oscillators and noise, with a goal of building unique timbres and textures.
This chapter explores synthesis categories, many of which are accompanied by Companion
Code files designed to promote further exploration and deeper understanding.
Traditionally, additive synthesis refers to summation of sine waves. The advantage and
popularity of the sine wave is related to the fact that any periodic vibration can be expressed
as a sum of sines provided with the proper frequencies, amplitudes, and phases. Thus, we can
view the sine wave as the fundamental “building block” of more complex periodic waveforms.
When we sum a collection of sine waves whose frequencies are all integer multiples of a
common value, the result is a waveform in which the component sines perfectly align with
one another. In this case, the tones coalesce and become more difficult to perceive on an indi-
vidual basis, and we tend to experience a sense of pitch determined by this common value, called the fundamental frequency.
The simplest way to approach additive synthesis in SC is to manually sum several signals
together, demonstrated in Code Example 3.1; the first 0.01 seconds of the resulting waveform are visualized in Figure 3.2.
(
{
var sig;
sig = SinOsc.ar(200, mul: 0.2);
sig = sig + SinOsc.ar(400, mul: 0.1);
sig = sig + SinOsc.ar(600, mul: 0.05);
sig = sig + SinOsc.ar(800, mul: 0.025);
sig = sig ! 2;
}.play;
)
(
{
var sig = 0, freqs = [200, 400, 600, 800];
freqs.do({ |f, i|
sig = sig + SinOsc.ar(f, mul: 0.2 / 2.pow(i));
});
sig = sig ! 2;
}.play;
)
(
{
var sig, freqs = [200, 400, 600, 800];
sig = freqs.collect({ |f, i|
SinOsc.ar(f, mul: 0.2 / 2.pow(i));
});
sig = sig.sum;
sig = sig ! 2;
}.play;
)
(
{
var sig, harm;
harm = LFTri.kr(0.1, 3).range(1, 50);
sig = Blip.ar([80, 81], harm);
sig = sig * 0.1;
}.play;
)
Klang represents a bank of summed sine oscillators. The frequencies are fixed, but the design is
more computationally efficient than creating individual sines. Klang provides arguments that
scale and shift the frequencies, but these values are fixed once the Synth is created. Klang is a
slightly unusual UGen in that the oscillator parameters must be specified as a Ref array containing
three internal arrays for frequencies, amplitudes, and phases. The general purpose of a Ref array
in the context of UGen functions is to prevent the multichannel expansion that would ordinarily
occur. A Ref can be created using new, or the backtick character can be used as a syntax shortcut:
Ref.new([[200, 400, 600, 800], [0.4, 0.2, 0.1, 0.05], [0, 0, 0, 0]]);
`[[200, 400, 600, 800], [0.4, 0.2, 0.1, 0.05], [0, 0, 0, 0]];
(
{
var sig;
sig = Klang.ar(
`[ // <- note the backtick character
Array.exprand(40, 50, 8000).sort,
Array.exprand(40, 0.001, 0.05).sort.reverse,
Array.rand(40, 0, 2pi)
]
);
sig = sig ! 2;
}.play;
)
(
{
var sig, freqs, amps, phases;
freqs = Array.exprand(40, 50, 8000).sort;
amps = Array.exprand(40, 0.005, 0.2).sort.reverse;
phases = Array.rand(40, 0, 2pi);
sig = DynKlang.ar(`[ // <- note the backtick character
freqs * LFNoise1.kr(0.02 ! 40).exprange(0.25, 2),
amps * LFNoise1.kr(1 ! 40).exprange(0.02, 1),
phases
]);
sig = sig ! 2;
}.play;
)
(
{
var sig, mod;
mod = SinOsc.ar(4, 3pi/2).range(0, 3000);
sig = Pulse.ar([90, 91]);
sig = LPF.ar(sig, 200 + mod);
sig = sig * 0.1;
}.play;
)
A word of caution: In modulation synthesis, the sine wave is frequently favored as a mod-
ulator because of its spectral simplicity, smooth shape, and predictable behavior. Other
waves, such as sawtooth, pulse, and triangle waves, have hard corners and/or abrupt
changes in their wave shape. When one of these signals is used as a modulator, certain
configurations may produce gritty, glitchy results, which may include clicks, pops, and
other types of digital noise. The desirability of these sounds is subjective, but the issue is
further compounded by mathematical nuances of backend UGen design. For example, it is
not uncommon for the frequency of a carrier oscillator to become negative when modulated.
SinOsc has a stable response to negative frequencies, equivalent to inverting the polarity
of the waveform (e.g. multiplication by negative one). By contrast, the amplitude of Pulse
substantially increases as its frequency approaches zero, and Saw accumulates positive DC
offset as its frequency becomes negative. Even worse, the output of LFTri, LFCub, and
LFPar will rapidly approach negative infinity when their frequency argument is negative.
Most filter UGens exhibit similarly explosive behavior when their frequency argument
is near or below zero. The takeaway is that UGens are not freely interchangeable! These
discrepancies underscore that using a programming language to create sound demands
a measure of caution. When dealing with experimental modulation configurations, you
should take steps to avoid harming your speakers/ears, such as using UGens that filter out
DC bias and limit amplitude (e.g., LeakDC and Limiter), and turning down your system
volume before evaluating dubious code.
(
{
var sig;
sig = ...some chaotic/unpredictable signal...
sig = Limiter.ar(sig);
sig = LeakDC.ar(sig);
}.play;
)
(
{
var sig, mod, modHz;
modHz = XLine.kr(1, 150, 10);
mod = SinOsc.ar(modHz).range(0, 1);
sig = SinOsc.ar(750, mul: mod);
sig = sig * 0.2 ! 2;
}.play;
)
FIGURE 3.3 Waveforms and spectra corresponding to Code Example 3.8, at the moment when the
modulator frequency reaches its peak.
Ring modulation (RM) is a nearly identical variation of AM. In RM, the output range of the
modulator is bounded between ±1. The spectral consequence of this change is that the carrier
is no longer present in the output spectrum. Similar examples appear in Code Example 3.9 and
Figure 3.4. Although “classic” AM and RM involve sine waves, these modulation techniques
can theoretically be applied to any two signals. In either case, we are essentially multiplying
one signal by another, and the results may vary significantly, depending on the types of signals
involved. Companion Code 3.3 explores additional ideas related to amplitude modulation.
(
{
var sig, mod, modHz;
modHz = XLine.kr(1, 150, 10);
mod = SinOsc.ar(modHz).range(-1, 1);
sig = SinOsc.ar(750, mul: mod);
sig = sig * 0.2 ! 2;
}.play;
)
FIGURE 3.4 Waveforms and spectra corresponding to Code Example 3.9, at the moment when the
modulator frequency reaches its peak.
(
{
var sig, mod;
mod = SinOsc.ar(1050).range(0, 1); // audio rate modulator
sig = SinOsc.ar(750);
sig = sig * mod * 0.2 ! 2;
}.play;
)
(
{
var sig, mod;
mod = SinOsc.kr(1050).range(0, 1); // control rate modulator
sig = SinOsc.ar(750);
sig = sig * mod * 0.2 ! 2;
}.play;
)
Even if the frequency of a modulator is low, using an audio rate modulator is a perfectly rea-
sonable choice. Doing so will ensure that the carrier and modulator have equal mathematical
precision and help guarantee the highest-quality sound. Audio rate signals require more CPU
power, but the additional load is rarely consequential.
Code Example 3.11 demonstrates a simple FM configuration. The carrier has a base
frequency of 750 Hz, offset by a modulator with an amplitude of 300. Thus, the fre-
quency of the carrier ranges from 450 to 1,050 Hz. The modulator frequency gradu-
ally increases over ten seconds, showcasing the spectral transformation that takes place.
You can visualize this transformation by first opening the spectrum analyzer with
FreqScope.new.
(
{
var sig, mod, modHz;
modHz = XLine.kr(1, 150, 10);
mod = SinOsc.ar(modHz, mul: 300);
sig = SinOsc.ar(750 + mod);
sig = sig * 0.2 ! 2;
}.play;
)
The realm of possibilities expands when additional oscillators are incorporated into a mod-
ulation network. Two possible configurations include multiple modulators, arranged in se-
ries or in parallel. The former describes a chain of signals, in which a modulator influences
another modulator, which in turn influences a carrier, and the latter describes a situation in
which multiple modulators are summed before influencing a carrier. Code Example 3.12
provides an example of each configuration. In both examples, the frequency of one modu-
lator is intentionally slow so that its influence is easily distinguished from the other modu-
lator. Additional ideas for simple and complex FM modulation are featured in Companion
Code 3.4 and 3.5.
(
{
var sig, mod1, mod2;
mod2 = SinOsc.ar(0.2, mul: 450);
mod1 = SinOsc.ar(500 + mod2, mul: 800);
sig = SinOsc.ar(1000 + mod1);
sig = sig * 0.2 ! 2;
}.play;
)
(
{
var sig, mod1, mod2;
mod2 = SinOsc.ar(0.2, mul: 450);
mod1 = SinOsc.ar(500, mul: 800);
sig = SinOsc.ar(1000 + mod1 + mod2);
sig = sig * 0.2 ! 2;
}.play;
)
(
{
var sig, mod;
mod = LFTri.ar(0.3).range(0, 1);
sig = Pulse.ar(100, width: mod);
sig = sig * 0.2 ! 2;
}.play;
)
(
{
var sig, mod;
mod = LFTri.ar(0.3).range(0, 1);
sig = VarSaw.ar(200, width: mod);
sig = sig * 0.2 ! 2;
}.play;
)
Modulation synthesis provides an opportunity to discuss the tangential topic of aliasing vs.
anti-aliasing UGens. Some UGens have low-frequency (LF) counterparts, such as Saw/LFSaw
and Pulse/LFPulse. Some, like SinOsc, have no LF counterpart. Others, like LFTri, have
no non-LF counterpart.
What’s the difference between LF and non-LF UGens? Simply, LF UGens are designed
to run at low frequencies, often below 20 Hz. They will alias at high frequencies, but pro-
duce clean and geometrically precise waveshapes, and therefore make excellent low-frequency
modulators. Non-LF UGens have internal design features which prevent or filter out aliasing
at higher frequencies. These UGens produce a clean sound throughout the entire audible
spectrum, but their waveshapes may have geometrical quirks or irregularities that reduce their
value as modulators. In Code Example 3.14, the frequency of an anti-aliasing pulse wave
increases over ten seconds. If this instance of Pulse is replaced with LFPulse, the sound will
alias at higher frequencies, producing a distinctive fluttering sound.
(
{
var sig, freq;
freq = XLine.kr(20, 8000, 10, doneAction: 2);
sig = Pulse.ar(freq); // replace with LFPulse and notice the difference
sig = sig * 0.1 ! 2;
}.play;
)
The takeaway is that anti-aliasing UGens are not freely substitutable with their LF counterparts.
UGens prone to aliasing tend to make better modulators, while anti-aliasing UGens make
better carriers, although there are some exceptions (SinOsc makes a fine carrier or modulator
at all frequencies). Table 3.2 provides a list of common oscillators categorized by waveshape
and whether they will exhibit aliasing.
TABLE 3.2 A list of common oscillators grouped by waveshape and aliasing behavior.
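As an informal sketch of this division of labor (assuming a booted server), an LF UGen can act as a clean, low-frequency modulator while a band-limited UGen acts as the carrier:
(
{
var sig, mod;
mod = LFSaw.kr(4).range(300, 600); // LF UGen: geometrically clean ramp modulator
sig = Saw.ar(mod); // band-limited carrier: remains alias-free
sig = sig * 0.1 ! 2;
}.play;
)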
Wavetable UGens expect buffer data in a special “wavetable format,” in which each pair of adjacent sample values n0 and n1 is encoded as the pair (2 * n0 − n1, n1 − n0). This format improves efficiency at the cost of allocating a larger buffer.
Why this unusual format? In the rare case where the frequency of an oscillator equals the sam-
pling rate divided by the size of the wavetable, the process of generating a signal is simple: the
oscillator simply retrieves one table value per sample. For other frequencies, table values will
not align with the sample clock, and the oscillator must interpolate between successive wave-
table values to approximate “in-between” values. By storing the data in wavetable format,
some of the burden of calculation is lifted from the server. There are several ways to create
raw wavetable shapes, but they must be converted to wavetable format (using asWavetable
or asWavetableNoWrap) and stored in a buffer before they can be used by a wavetable UGen.
(
~wt = Signal.sineFill(8192, [1], [0]).asWavetable;
b = Buffer.loadCollection(s, ~wt);
)
There is a limit to the number of buffers that can be allocated (the default is 1024). If Buffer.
loadCollection(s, ~wt) is evaluated a second time, the server will allocate a new buffer,
without deallocating the previous, even if the same wavetable and variable name are used.
When you are finished with a buffer, it is sensible to free it with b.free. Quitting and rebooting
the audio server has the side effect of deallocating all buffers. To conserve space, code examples
omit a buffer-freeing expression; it’s assumed that the user will handle this responsibility, if
needed. Wavetable creation and playback techniques are explored in Companion Code 3.6.
Code Example 3.16 creates four consecutive wavetables and uses iteration to load and plot
them. Note that lace is an array method that creates a new array by interleaving elements of
multiple starting arrays, which in this case inserts zeroes between non-zero amplitudes, thus
zeroing the amplitudes of even-numbered harmonics.
(
~wt = [
Signal.sineFill(8192, 1 ! 4, 0 ! 4),
Signal.sineFill(8192, 1 / (1..50), 0 ! 50),
Signal.sineFill(
8192,
[1 / (1, 3..50), 0 ! 25].lace(50),
0 ! 50
),
Signal.sineFill(
8192,
Array.exprand(50, 0.001, 1).sort.reverse,
{rrand(0, 2pi)} ! 50
),
];

// load the four wavetables into consecutive buffers and plot each one
b = Buffer.allocConsecutive(4, s, 16384);
b.do({ |buf, i|
buf.loadCollection(~wt[i].asWavetable);
~wt[i].plot;
});
)
Using the wavetables in Code Example 3.16, we can morph from one to another with VOsc,
demonstrated in Code Example 3.17. In this example, a sine wave modulates the buffer posi-
tion index, which sweeps back and forth across these four tables every 20 seconds. Note that
it’s sensible to limit the buffer position so that it never equals the bufnum of the last buffer in
the consecutive block, implemented by setting the upper boundary of bufmod as 2.999 instead
of 3. If this value equals the highest bufnum, SC will attempt to access that buffer and the
buffer whose bufnum is one greater, in order to calculate a weighted average between them. If
no such buffer exists, VOsc will fail and its output signal will become silent. As an alternative
to constraining the buffer index, we can allocate one extra buffer using allocConsecutive,
but leave it empty. Companion Code 3.7 continues an exploration of VOsc techniques.
(
{
var sig, bufmod;
bufmod = SinOsc.kr(0.05, 3pi/2).unipolar(2.999);
sig = VOsc.ar(b[0].bufnum + bufmod, 200);
sig = sig * 0.1 ! 2;
}.play;
)
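The guard-buffer alternative might be sketched as follows, assuming the ~wt Signals from Code Example 3.16 are still defined; the fifth buffer is left empty and merely guarantees that a position index of exactly 3.0 remains valid:
(
b = Buffer.allocConsecutive(5, s, 16384); // four wavetables plus one empty guard buffer
4.do({ |i| b[i].loadCollection(~wt[i].asWavetable) });
)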
3.4.3 WAVESHAPING
The Shaper UGen offers a variation on previously described techniques. In the case of Osc
and VOsc, the internal process that reads wavetable data is a repeating linear ramp signal. We
can indirectly influence this signal by modulating its phase, but we can’t replace it with a dif-
ferent signal. Shaper allows this substitution, accepting a wavetable buffer and an arbitrary
input signal. The input signal is used as an index into the wavetable, treated as a transfer func-
tion, which maps input samples onto new output samples. Transfer functions typically have
an input and output range bounded between ±1.
The identity transfer function (the function that applies no change to its input signal)
is represented by the Cartesian function y = x, where x is a sample value from the input
signal, and y is the corresponding output sample. Other simple examples include the line
y = –x, which inverts the input signal, and the line y = x / 2, which reduces amplitude by
half. Nonlinear transfer functions produce more interesting results. For example, an S-shaped
transfer function pushes input values closer to ±1 and therefore has a “soft-clipping” effect on
an input waveform. These transfer functions are illustrated in Figure 3.6.
The process of creating wavetables for Shaper is slightly different from that for
other wavetable UGens. With Osc/VOsc, the wavetable is treated as a cyclic entity.
Consequentially, the last and first table values are treated as “adjacent” when an interpo-
lation calculation is needed between these values. However, with Shaper, the wavetable is
treated as a static transfer function, rather than a repeating shape, which means interpo-
lation between the first and last value is a meaningless operation that may produce unin-
tended results. For this reason, one extra value is placed at the end of the transfer function,
to facilitate correct interpolation behavior. In terms of implementation, the starting Signal
should have a size that is a power of two plus one. Then, asWavetableNoWrap converts
the data to wavetable format such that the final size is 2 * (signalSize – 1) (see Code
Example 3.18). When creating transfer functions, it is sensible to create a shape that passes
through the point (0, 0), so that an input value of zero produces an output value of zero,
which avoids problems related to DC offset. Companion Code 3.8 explores additional
ideas related to Shaper.
(
~wt = Env.new([-1, 0, 1], [1, 1], [4, -4]).asSignal(8193);
b = Buffer.loadCollection(s, ~wt.asWavetableNoWrap);
)
(
{
var sig, index;
index = SinOsc.ar(200);
sig = Shaper.ar(b, index);
sig = sig * 0.2 ! 2;
}.play;
)
Wavetable synthesis is a powerful tool, and serves as the basis for many synthesis platforms
and plug-ins. Among its most appealing aspects is that it allows explicit creation of detailed,
highly customized waveshapes, a process made considerably more difficult when restricted to
mathematical operations on a small handful of standard waveshapes. In addition, wavetable
synthesis tends to be relatively efficient in comparison to some other synthesis techniques, in-
viting rich, immersive timbres at relatively low cost.
A filter can become unstable and “explode,” producing a sudden loud sound. Usually, such an explosion stems from a frequency value that is
negative, too close to zero, or above the Nyquist frequency. Additionally, practice good creative
audio hygiene: before evaluating code in uncertain situations, apply a limiter, mute your system
volume while watching the server meters, remove your headphones, and so on.
Subtractive synthesis is the audio equivalent of sculpting. A sculptor begins with a large,
unformed mass of material, and selectively removes parts to produce a desired result. In a sense,
subtractive synthesis is the opposite of additive synthesis, where we begin with nothing, and con-
struct a spectrum, piece-by-piece. In subtractive synthesis, we begin with a spectrally rich signal,
and use filters to attenuate certain frequency regions, so that only the desired content remains.
{WhiteNoise.ar(0.1 ! 2)}.play;
{PinkNoise.ar(0.1 ! 2)}.play;
{BrownNoise.ar(0.1 ! 2)}.play;
FIGURE 3.7 A spectrum analysis of white, pink, and brown noise, using a logarithmic frequency scale.
REAPER is a registered trademark of Cockos Incorporated, used with permission.
SC also includes several low-frequency noise generators, whose waveforms are depicted in
Figure 3.8. These UGens are commonly used as modulators for other signals but can also be
used as audio-rate noise generators. Unlike white/pink/brown noise, these generators include a
frequency argument, which determines the rate at which they select random values. LFNoise0
is a non-interpolating noise generator, also called a “sample-and-hold” signal. Each time it
selects a random value, it holds that value until the next is generated, producing a waveform
with the appearance of a city skyline. LFNoise1 linearly interpolates between random values
to produce random ramps, and LFNoise2 uses quadratic interpolation.1
FIGURE 3.8 A 0.2-second plot of three different low-frequency noise generators running at 100 Hz.
A good way to hear the difference between these three types of noise generators is to use
them as frequency inputs for periodic generators, as shown in Code Example 3.20. LFNoise0
creates what could reasonably be called the quintessential “computer music” sound.
CODE EXAMPLE 3.20: LFNoise0 USED AS A FREQUENCY MODULATOR.
(
{
var sig, freq;
freq = LFNoise0.kr(8).exprange(150, 2000);
sig = SinOsc.ar(freq) * 0.2 ! 2;
}.play;
)
The frequency argument of these noise generators provides a spectral dimension. As this
frequency argument approaches the sampling rate, all three of these UGens become prac-
tically indistinguishable from white noise, but at lower frequencies, they have discernible
sonic personalities. Smoother interpolation produces less high-frequency energy. If using
these noise generators as audio-rate sources, or if modulating their frequency argument, it’s
usually better to use their “dynamic” counterparts (designated with a capital letter D in the
UGen name). A comparison appears in Code Example 3.21. These dynamic versions are less
computationally efficient but respond consistently and uniformly to the full range of audible
frequencies. Note that the dynamic version of LFNoise2 is LFDNoise3, which uses cubic
interpolation.
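For a rough sense of this difference (a sketch, assuming a booted server), we can sweep the frequency of LFNoise0 and its dynamic counterpart LFDNoise0 and compare by ear:

{LFNoise0.ar(XLine.kr(100, 10000, 5)) * 0.1 ! 2}.play; // sweep response is inconsistent

{LFDNoise0.ar(XLine.kr(100, 10000, 5)) * 0.1 ! 2}.play; // responds smoothly to the sweep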
Low-pass filters attenuate spectral content above a cutoff frequency, and high-pass filters attenuate content below it. Band-pass filters preserve a range of spectral content around a center frequency
while attenuating spectral content outside of the band, and band-reject filters have the oppo-
site effect.
The UGens LPF, HPF, BPF, and BRF provide an option for each of these four categories.
LPF and HPF accept an input signal and a cutoff frequency. BPF and BRF accept an input signal,
a center frequency, and an additional argument named rq, short for “reciprocal quality.”
Quality, often shortened to “Q,” is a measure of the selectivity of a filter, which correlates to its
tendency to exhibit resonant behavior. In the case of band-pass and band-reject filters, Q is the
ratio of center frequency to bandwidth. For example, if the center frequency is 1,000 Hz and
the bandwidth is 100 Hz, the quality of the filter is 1,000/100 = 10. A higher quality means
a narrower band and a more selective filter. In SC, filters expect this value to be specified as
the reciprocal of Q (bandwidth divided by center frequency), a design choice which slightly
increases UGen efficiency. An rq equal to one represents a nominal base quality, which
increases as rq approaches zero. Code Example 3.22 includes code examples for each of these
four filter UGens.
(
{
var sig, cutoff;
cutoff = LFTri.kr(0.1, 3).exprange(100, 10000);
sig = PinkNoise.ar(1);
sig = LPF.ar(sig, cutoff) * 0.25 ! 2; // replace with HPF
}.play;
)
(
{
var sig, cutoff, rq;
cutoff = LFTri.kr(0.1, 3).exprange(100, 10000);
rq = MouseY.kr(0.01, 1, 1).clip(0.01, 1);
sig = PinkNoise.ar(1);
sig = BPF.ar(sig, cutoff, rq) * 0.25 ! 2; // replace with BRF
}.play;
)
In general, the output amplitude of a filtered signal may be noticeably different than the
amplitude of the input signal, depending on the spectrum of the input signal, the type of
filter being used, and the value of the cutoff/center frequency. In the context of mixing and
audio engineering, filters are often used as corrective tools that remove problem frequencies.
In such cases, there is rarely a need to compensate for what is lost. In creative contexts, how-
ever, filters may be used in bolder and more experimental ways. Particularly when the input
signal is broadband noise, a filter may remove a significant amount of spectral content, which
lowers the overall amplitude. It may be desirable to compensate for this loss by boosting the
amplitude of the output signal. The exact amount of boost is context-sensitive and may rely
on some trial-and-error guesswork. For example, when applying a band-pass filter to broad-
band noise, rq values close to zero will drastically reduce the amplitude. In this specific case,
a sensible starting point for a compensation scalar is the reciprocal of the square root of rq. In
Code Example 3.23, vertical mouse movements will alter the quality, but the overall ampli-
tude remains stable.
(
{
var sig, cutoff, rq;
cutoff = LFTri.kr(0.1, 3).exprange(100, 10000);
rq = MouseY.kr(0.01, 1, 1).clip(0.01, 1);
sig = PinkNoise.ar(1);
sig = BPF.ar(sig, cutoff, rq, mul: 1 / rq.sqrt) * 0.5 ! 2;
}.play;
)
Filter copies can be applied in series to exaggerate their behavior and increase selectivity,
which can be demonstrated by increasing the iteration count in Code Example 3.24. There
is a point of diminishing returns above which additional filters will consume processing
power without significantly improving the sound. With large numbers of filters in series, the
output signal may begin to exhibit a substantial loss of amplitude, even within the “passed”
spectral band.
(
{
var sig;
sig = WhiteNoise.ar(1 ! 2);
2.do({sig = LPF.ar(sig, 1000)}); // change to 3.do, 4.do, etc.
sig = sig * 0.25;
}.play;
)
RLPF and RHPF are “resonant” variations of LPF and HPF. Like BPF and BRF, these UGens in-
clude a reciprocal quality parameter, capable of producing resonance at the cutoff frequency.
As rq approaches zero, the resonant band narrows, selectivity increases, and the amplitude of
spectral content near the cutoff frequency increases. Thus, it’s important to be cautious with
rq values close to zero, which may distort the output signal (a limiter may be appropriate).
RLPF and RHPF are useful in situations where spectral emphasis at the cutoff frequency is
desired. By sweeping the cutoff frequency in certain ways, the output signal acquires a char-
acter that might be described as “wet,” reminiscent of a water droplet falling into a bucket, or
a continuous vocal morph between “ooh” and “ahh.” A resonant low-pass filter is commonly
used with an LFO to create a “wah-wah” effect, often found in synthesized basses and leads.
An example appears in Code Example 3.25.
(
{
var sig, cutoff, freq;
freq = LFNoise0.kr(1).range(25, 49).round(1).midicps;
cutoff = VarSaw.kr(6, width: 0.1).exprange(50, 10000);
sig = Pulse.ar(freq * [0.99, 1.01]);
sig = RLPF.ar(sig, cutoff, 0.1);
sig = sig * 0.1;
}.play;
)
Resonz and Ringz, demonstrated in Code Example 3.26, are resonant filters that provide
an entryway into modal synthesis. Resonz is a band-pass filter with a constant gain at zero
decibels. This means that as the bandwidth decreases, the sense of resonance increases, but
spectral content at the center frequency will remain at its input level, while surrounding content
is attenuated. It is virtually indistinguishable from BPF in terms of usage and sound. Ringz, on
the other hand, has a variable gain that depends on the bandwidth, specified indirectly as a 60
dB decay time. As this decay time increases, bandwidth narrows, a sense of resonance increases,
and spectral content at the center frequency undergoes a potentially dramatic increase in am-
plitude. The difference between Resonz and Ringz is subtle but has significant consequences.
In terms of practical usage, because of its variable-gain design, Ringz is intended to be
driven by single-sample impulses. Even an excitation signal a few samples long has the poten-
tial to overload Ringz and produce a distorted output signal. Longer signals, such as sustained
noise, can technically be fed to an instance of Ringz, but the amplitude of the excitation signal
and/or the output signal must be drastically reduced in order to compensate for the increase in
level, particularly if the decay time is long. Resonz, by contrast, is designed to accept sustained
excitation signals and is more likely to need an amplitude boost to compensate for low levels,
particularly in narrow bandwidth situations. Feeding single-sample impulses into Resonz is
fine, but the level of the output signal will likely be quite low.
(
{
var sig, exc;
exc = Impulse.ar(1);
sig = Ringz.ar(
in: exc,
freq: 800,
decaytime: 1/3
);
sig = sig * 0.2 ! 2;
}.play;
)
(
{
var sig, exc;
exc = PinkNoise.ar(1);
sig = Resonz.ar(
in: exc,
freq: 800,
bwr: 0.001,
mul: 1 / 0.001.sqrt
);
sig = sig * 0.5 ! 2;
}.play;
)
Klank and DynKlank encapsulate fixed and dynamic banks of Ringz resonators, offering a
slightly more convenient and efficient option than applying multichannel expansion to an
instance of Ringz (see Code Example 3.27). These UGens require a Ref array (see Section
3.2.2) containing internal arrays of frequencies, amplitudes, and decay times of simulated
resonances. The frequencies can be scaled and shifted, and the decay times can also be scaled.
(
{
var sig, exc, freqs, amps, decays;
freqs = [211, 489, 849, 857, 3139, 4189, 10604, 15767];
amps = [0.75, 0.46, 0.24, 0.17, 0.03, 0.019, 0.002, 0.001];
decays = [3.9, 3.4, 3.3, 2.5, 2.2, 1.5, 1.3, 1.0];
exc = Impulse.ar(0.5);
sig = Klank.ar(
`[freqs, amps, decays], // <- note the backtick character
exc,
);
sig = sig * 0.25 ! 2;
}.play;
)
As a practical example for further study, the end of Companion Code 3.10 uses modal syn-
thesis techniques to recreate the sound of striking an orchestral triangle. The frequencies,
amplitudes, and decay times were derived through methodical observation of real-time spec-
tral analyses of a triangle sample (pictured in Figure 3.10), along with some visual guesswork.
FIGURE 3.10 A spectrum analysis of a struck orchestral triangle. REAPER is a registered trademark of
Cockos Incorporated, used with permission.
FIGURE 3.11 A visual depiction of waveform clipping, folding, and wrapping applied to a sine wave.
(
{
var sig, mod;
mod = SinOsc.kr(0.1, 3pi/2).exprange(0.2, 4);
sig = SinOsc.ar(300, mul: mod);
sig = Clip.ar(sig, -1, 1); // replace with Fold.ar or Wrap.ar
sig = sig * 0.2 ! 2;
}.play;
)
Used in isolation, these three waveform operations are coarse tools, relatively incapable of nu-
ance. They tend to produce more satisfying results when used in combination with low-pass
filters, which help soften the hard waveform corners and temper the high frequency content.
The output is bounded between ±1 with reasonable smoothness. As the name implies,
softclip is a gentler version of normal clipping. distort is a similar operation, which affects
all values by applying the formula y = x / (1 + |x|), where x is an input sample and y is the
corresponding output sample. The results are again bounded between ±1, and a similarly
gentle distortion curve is applied. Lastly, the hyperbolic tangent is a function which behaves
similarly to softclip and distort. It's not essential to understand the mathematical details of
this operation, but it suffices to say that tanh is defined for all real numbers and has an output
range bounded between ±1, making it a useful option for gentle distortion. All three methods
result in additional higher spectral content, but not quite so aggressively as clipping, wrapping,
and folding.
FIGURE 3.12 A visual depiction of softclip, distort, and tanh applied to a sine wave.
(
{
var sig, mod;
mod = SinOsc.kr(0.1, 3pi/2).exprange(0.2, 4);
sig = SinOsc.ar(300, mul: mod);
sig = sig.softclip; // replace with 'distort' or 'tanh'
sig = sig * 0.2 ! 2;
}.play;
)
FIGURE 3.13 A visual depiction of lincurve applied to a sine wave, using positive and negative curve
values.
(
{
var sig, mod;
mod = SinOsc.kr(0.2, 3pi/2).exprange(1, 15);
sig = SinOsc.ar(300);
sig = sig.lincurve(-1, 1, -1, 1, mod);
sig = LeakDC.ar(sig) * 0.2 ! 2;
}.play;
)
3.7.4 BITCRUSHING
Bitcrushing has become something of a catch-all term that describes an artificial reduction
of the resolution capabilities of a digital audio system. Technically, bitcrushing is a reduction
in vertical resolution, that is, a reduction in the number of discrete amplitude values that can
be assigned to individual samples. It is also possible to emulate a reduction in the sampling
rate, which produces a loss of horizontal resolution. Both types of signal degradation generate
additional high spectral content, but bitcrushing also alters a signal’s dynamic range (making
quiet sounds louder), while a sample rate reduction produces aliasing and tends to influ-
ence pitch.
An implementation of bitcrushing is surprisingly simple: we can round an audio signal to
the nearest multiple of a value, imposing a “staircase” shape. To simulate a sample rate reduc-
tion, the Latch UGen can be used, which applies a sample-and-hold operation to its input signal
whenever it receives a trigger. In both cases, the quality of the audio signal will degrade (possibly
quite dramatically), but this type of sound may be stylistically appropriate in some situations.
(
{
var sig, mod;
mod = SinOsc.kr(0.2, 3pi/2).exprange(0.02, 1);
sig = SinOsc.ar(300);
sig = sig.round(mod) * 0.2 ! 2;
}.play;
)
(
{
var sig, mod;
mod = SinOsc.kr(0.2, 3pi/2).exprange(SampleRate.ir/2, SampleRate.ir/100);
sig = SinOsc.ar(300);
sig = Latch.ar(sig, Impulse.ar(mod));
sig = sig * 0.2 ! 2;
}.play;
)
It is also worth recognizing that the compositional process is more than just accumulated knowledge of
these tools; it also involves important decisions about contrast, density, pace, symmetry, use of
silence, and many other factors that determine the where, how, and why of a musical compo-
sition. A synthesized drone, an arpeggiated melody, or some other musical unit can rarely be
considered a musical composition by itself, no matter how detailed and immersive it may be.
What came before it? What comes after? How does its sonic material develop over time? How
does it establish tension, and how does it resolve? What kind of interactive narrative is created
through its existence with other musical elements?
These are questions that can only be answered through patience and self-discovery. For
now, in the interest of providing one last practical application, Companion Code 3.12 brings
together a few concepts from this chapter and synthesizes a few basic percussion sounds: a
kick, snare, and hi-hat. These examples are intended as starting points for further study and
experimentation.
Note
1 As a result of its quadratic algorithm, LFNoise2 occasionally overshoots its range boundaries. It can
be a dangerous choice when used to control filter frequencies. Consider clipping its output or using
a different UGen.
CHAPTER 4
SAMPLING
4.1 Overview
For the purposes of this chapter, sampling refers to creative practices that rely on recorded
sound, typically involving modified playback of audio files stored in blocks of memory on the
audio server. In a sense, sampling and synthesis are two sides of the same signal-generating
coin. Synthesis relies on mathematical algorithms, while sampling is based on the use of con-
tent that has already been produced and captured, but most sound sources found throughout
creative audio practices are rooted in one of these two categories.
Sampling opens a door to a world of sound that is difficult or impossible to create using
synthesis techniques alone. Anything captured with a microphone and rendered to a file in-
stantly becomes a wellspring of creative potential: a recording of wildlife can become a surreal
ambient backdrop, or a recording of a broken elevator can be chopped into weird percussion
samples. Instead of using dozens of sine waves or filters to simulate a gong, why not use the
real thing?
Before loading sampled audio files into software, it’s wise to practice good sample hygiene.
Unnecessary silence should be trimmed from the beginnings and ends of the files, samples
should be as free as possible from background noise and other unwanted sounds, and the peak
amplitudes of similar samples should be normalized to a consistent level. Normalization level is
partly a matter of personal preference, but –12 dBFS is usually a reasonable target, which makes
good use of available bit depth while reserving ample headroom for mixing with other sounds.
4.2 Buffers
In order for a sampled audio file to be usable in SC, the server must be booted and the file
must be loaded into a buffer, which is simply a block of allocated memory in which data can
be stored. Buffers are server-side objects, but creation and interaction with buffers usually
happens through the language-side Buffer class and its various methods. Although buffers
typically store sample values of digital audio files, they are essentially multichannel arrays that
hold floating point values, and therefore may contain data that represents other things, like
envelopes, wavetables, or pitch collections. Once a buffer is allocated on the server, it remains
there, globally accessible to all server processes, until it is explicitly freed (i.e., deallocated) or
the server is quit (pressing [cmd]+[period] has no effect on buffers). Buffers are a central
class in SC, and a solid familiarity is essential for sampling-based work.
At minimum, the read method needs to know the instance of the server on which the buffer
will reside, and a string representing the path to the file. The format should be wav or aiff/aif,
although compressed audio file formats, such as mp3, may also work on newer versions of SC.
As a convenience,
you can drag and drop a file into the code editor, and the appropriate string will be generated,
which avoids the need to manually type long paths. After creation, a buffer can be auditioned
using play. Buffers can be visually rendered with plot. This visualization is not nearly as so-
phisticated as that of a dedicated waveform editor, and the plot window may behave sluggishly
for large buffers, but plotting remains a useful feature. A buffer can be deallocated with free.
If multiple buffers have been created, it’s possible to deallocate them all at once by evaluating
Buffer.freeAll.
s.boot;
b = Buffer.read(s, "/Users/eli/Sounds/scaudio/drumsamples/Claps/clap1.aif");
b.plot;
b.play;
b.free;
The process of allocating a buffer and loading audio samples into it does not happen instanta-
neously. Attempts to create a buffer and access it within the same evaluation may fail. Buffers
live on the server, but we interact with them from the language, which gives rise to timing-
related nuances that may create confusing situations. For this reason, read, plot, and play
are meant to be evaluated one after the other in Code Example 4.1. Executing these actions
one-by-one is a primitive but effective way to avoid problems. More elegant options will be
discussed in later sections of this book.
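For instance, read accepts an action argument, a function that is evaluated only after the file has been fully loaded (the path below is a placeholder):

b = Buffer.read(s, "path/to/file.wav", action: { |buf| buf.play });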
The standard download of SC includes a “Resources” folder, which contains two sound
files whose primary purpose is to support interactive code examples in buffer-related help files.
The paths to these files vary by operating system, but they can be accessed using Platform, a
utility class that homogenizes cross-platform behavior and code expression. The built-in sound
files can be loaded and played as shown in Code Example 4.2.
(
b = [
Buffer.read(s, Platform.resourceDir ++ "/sounds/a11wlk01-44_1.aiff"),
Buffer.read(s, Platform.resourceDir ++ "/sounds/a11wlk01.wav")
];
)
b[0].play;
b[1].play;
An allocated buffer has several attributes associated with it, which can be queried via language-
side methods, shown in Code Example 4.3. Specifically, each buffer has a unique identifying
integer called a bufnum. Although it’s possible to manually specify buffer numbers as buffers
are allocated, best practice is to allow automatic assignment. Buffers also have a frame count,
equal to the number of samples divided by the number of channels. In the case of a one-channel
buffer, the number of frames equals the number of samples. In a two-channel buffer, the sample
count is twice the frame count, and so on. An illustration of the difference between frames and
samples appears in Figure 4.1. Additional attributes include duration, number of channels, and
the sample rate of the buffer (which may be different from the sampling rate of the audio server).
b.duration;
b.bufnum;
b.numFrames;
b.numChannels;
b.sampleRate;
FIGURE 4.1 An illustration of the relationship between channels, frames, and samples in a two-
channel buffer.
It’s sometimes desirable to load only part of an audio file into a buffer. This can be done
using the third and fourth arguments of read, which correspond to a starting frame index
and a number of frames (startFrame and numFrames). However, a trial-and-error approach
to finding the right frame range is often inferior to using dedicated waveform editing software
to edit and export desired portions as new audio files, which can then be read into SC as is.
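As a sketch, with a placeholder path and a 44.1 kHz file assumed, the following would skip the first second of the file and read the next two seconds:

b = Buffer.read(s, "path/to/file.wav", startFrame: 44100, numFrames: 88200);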
The two built-in samples are one-channel audio files. The read method automatically
detects channel size and allocates a buffer with that many channels. However, we may want to
read only one channel of a multichannel audio file, which can be handled with readChannel.
This class method is similar to read but includes a channels argument, depicted in Code
Example 4.4. This value should be an array of channel indices. If the path points to a stereo
file, a value of [0] will read only the left channel, and a value of [1] will read only the right
channel. Mixing a multichannel file to a monophonic format during this reading process is
not easy and not recommended; it’s better to deal with monophonic mixing using summation
within a UGen function.
(
b = Buffer.readChannel(
s,
"path/to/some/stereo/file.aiff",
channels: [0] // only read the left channel
);
)
b.play;
For simplicity, the code examples in this chapter will rely on these built-in audio files, but
substituting your own audio files is possible and encouraged. Supplementary audio files are
included with Companion Code files for this chapter, for the sake of creating more interesting
and varied examples, but substitutions can be made there as well.
(
~drone = [
Buffer.read(s, "/path/to/drone0.wav"),
Buffer.read(s, "/path/to/drone1.wav"),
Buffer.read(s, "/path/to/drone2.wav")
];
~creak = [
Buffer.read(s, "/path/to/creak0.wav"),
Buffer.read(s, "/path/to/creak1.wav"),
Buffer.read(s, "/path/to/creak2.wav")
];
)
PathName is a utility class for accessing files and folders on your computer. In combina-
tion with iteration, PathName can automate the process of passing file paths to buffers and
storing the buffers in a collection. We provide PathName with an absolute path to a folder, use
entries to return an array of PathName instances that represent folder contents, and itera-
tively call fullPath on each instance to return each file’s complete path string. This approach,
demonstrated in Code Example 4.6, is especially convenient if all your audio files are stored in
one folder.1 In addition, this code will not need modification if audio files are added, renamed,
or removed from their folder. However, if the folder contains any non-audio files or unrecog-
nized file formats, this code will fail.
(
var folder = PathName.new("/path/to/folder/of/audio/files/");
b = folder.entries.collect({ |file| Buffer.read(s, file.fullPath) });
)
b[0].play;
b[1].play; // etc.
Access via numerical index is well-suited for audio files with similarly numerical file-naming
schemes. In other cases, however, your sample library may already be organized into subfolders
and sub-subfolders, and you may want to preserve this organizational structure as files are
loaded into buffers. The Event class is a different type of collection, distantly related to arrays,
which may be useful in this situation. An Event is an unordered collection, but each stored
item is associated with a unique “key,” designated as a symbol. An Event instance can be
created with the new method, or by using an enclosure of parentheses. Basic usage (without
involving buffers) is shown in Code Example 4.7.
(
b = (); // create an empty Event
b[\abc] = 17; // store three pieces of data in the Event
b[\jkl] = 30.2;
b[\xyz] = [2, 3, 4];
)
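Stored items are then retrieved using the same bracketed key syntax, for example:

b[\abc]; // -> 17
b[\xyz]; // -> [2, 3, 4]
b.keys; // view all stored keys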
The main advantage of storing buffers in an Event is the ability to use meaningful names, rather
than numbers, to identify groups of audio files. An effective strategy is to store arrays of buffers in
an event, pairing each array with a key named according to the immediate parent folder of each set
of audio files. The example in Code Example 4.8 assumes there is a main folder, which contains
some number of subfolders. We imagine these subfolders are named according to the source mate-
rial of the audio files they contain, for example, “wood,” “metal,” and so on. After evaluation, the
Event contains arrays of buffers, each stored at an appropriate symbol key (e.g., \wood, \metal).
(
var folder, subfolders;
b = ();
folder = PathName.new("/path/to/main/folder/");
subfolders = folder.entries; // array of PathName subfolders
subfolders.do({ |sub|
// for each subfolder, iterate over the files
// and load each one into a buffer
var bufArray = sub.entries.collect({ |file|
Buffer.read(s, file.fullPath);
});
// then, store the array in the Event, at a key
// named according to the subfolder name
b[sub.folderName.asSymbol] = bufArray;
});
)
b[\wood][0].play; // play the 0th file in the wood subfolder
The code in Code Example 4.8 assumes there are exactly three hierarchical layers: the main
folder, the subfolders, and the audio files within those subfolders. If the directory structure
deviates from this model (e.g., if there are additional layers of subfolders, or if a folder contains
a mix of files and folders, or if a folder contains a file in an invalid format), this code will
fail. The Companion Code provided at the end of this section illustrates additional
refinements capable of anticipating such deviations.
One final problem remains unaddressed. All the previous examples rely on at least one
absolute path. If one of these code examples were run on a different computer—even if the
audio files were also moved to this computer—the absolute path would likely not represent a
valid location. The solution is to identify the main folder using a relative path, rather than an
absolute one.
A solid approach is to create a main project folder that contains all your assets, that is,
your code and folder(s) of audio files. For example, if your project folder contains a code file
and a folder named “audio” (which contains some number of subfolders of audio files), the
expression:
"audio/".resolveRelative;
will return a valid absolute path that points to a folder named “audio,” assumed to be present
in the same directory as the code file. Therefore, the expression:
PathName.new("audio/".resolveRelative);
will return a valid instance of PathName through which all subfolders and files can be
accessed. Thus, as long as the main project folder remains intact, code that relies on relative
paths will work correctly on any computer. Companion Code 4.1 continues a discussion of
buffer management strategies and concludes with a robust buffer-reading function that can be
easily transplanted into any sample-based project.
4.3.1 PLAYBUF
PlayBuf is a buffer-reading UGen that includes a modulatable playback rate, can be triggered
to jump to a particular frame, and is capable of automatic looping. At minimum, to produce
a signal, PlayBuf needs the number of audio channels of a buffer, along with its bufnum,
demonstrated in Code Example 4.9. Note that many subsequent code examples in Section 4.3
assume the server is booted and that a one-channel buffer is loaded and stored in the inter-
preter variable b. In these examples, these steps may not be explicitly indicated.
{PlayBuf.ar(b.numChannels, b.bufnum)}.play;
Using function-dot-play is useful for quickly auditioning a sound file, but offers few
advantages over b.play. Creating a SynthDef offers reuse and flexibility, and helps highlight
some common pitfalls. For instance, it may be tempting to declare arguments for the number
of channels and the bufnum (see Code Example 4.10), but evaluating this code produces a
“Non Boolean in test” error.
(
SynthDef(\playbuf, {
arg nch = 1, buf = 0, out = 0;
var sig = PlayBuf.ar(nch, buf);
Out.ar(out, sig ! 2);
}).add;
)
Declaring a bufnum argument (here, named buf) is not related to this error. On the con-
trary, declaring a bufnum argument is almost always a good idea, as it allows us to specify
an arbitrary buffer when creating a Synth. If we instead provided an integer or the expression
b.bufnum inside PlayBuf, a static value would be hard-coded into the SynthDef, restricting
playback to one (and only one) buffer.
The non-Boolean error seems strange at first, since nothing in the SynthDef seems to
require a Boolean value. The error is the result of attempting to define the number of output
channels as an argument. A fundamental design feature of the audio server is that all sig-
nals in a UGen function must have a fixed channel size when the SynthDef is created, so
the numChannels argument must be a static value. In fairness, it appears at first glance
that nch is a static value (it has been initialized to one). However, declared arguments are
not static values—they are signals! Specifically, when an argument is declared, it automat-
ically becomes an instance of a UGen belonging to a small family that includes Control,
AudioControl, TrigControl, and a few others. These UGens are specifically designed to
allow external control mechanisms to influence a signal algorithm, for example, through set
messages. Because signals are dynamic entities whose value can change over time, the server
is unable to determine an appropriate channel size for PlayBuf and reports an error. The
solution is to explicitly specify the number of PlayBuf channels as an integer, as shown in
Code Example 4.11. If your sample library includes a mix of mono and stereo files, a simple
solution is to create two SynthDefs with different channel sizes. If you attempt to play a
buffer with a number of channels that doesn’t match the UGen channel size, SC will post a
warning message and play however many channels it can accommodate (which may be fewer
than expected).
(
SynthDef(\playbuf, {
arg buf = 0, out = 0;
var sig = PlayBuf.ar(1, buf); // fixed channel size
Out.ar(out, sig ! 2);
}).add;
)

Synth(\playbuf, [\buf, b.bufnum]);
Because the server automatically assigns bufnums, you should never make guesses or
assumptions about which bufnum corresponds to which buffer. Instead, you should always
provide the bufnum explicitly when creating a Synth, as demonstrated in the last line of Code
Example 4.11.
Like EnvGen, PlayBuf has an inherently finite duration, and therefore includes a
doneAction. If PlayBuf reaches the last frame of a buffer, and if its loop value is zero, it
checks its doneAction. If the doneAction is two, the Synth will free itself. If loop is one,
playback will loop and the doneAction will be ignored. A SynthDef argument can be used to
dynamically modulate the loop behavior, shown in Code Example 4.12.
(
SynthDef(\playbuf, {
arg buf = 0, loop = 1, out = 0;
var sig = PlayBuf.ar(1, buf, loop: loop, doneAction: 2);
Out.ar(out, sig ! 2);
}).add;
)
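A brief usage sketch (the variable x is arbitrary): create a looping Synth, then clear the loop flag so that the doneAction can take effect:

x = Synth(\playbuf, [\buf, b.bufnum]);
x.set(\loop, 0); // the Synth frees itself when playback reaches the final frame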
PlayBuf includes a rate parameter, which indirectly controls the position of an internal
frame pointer, demonstrated in Code Example 4.13. This value is expressed as a ratio, so a
value of two doubles playback speed while also shifting pitch up one octave and cutting play-
back duration in half. A rate of 0.5 has the opposite effects. Negative values reverse playback
direction. A rate of zero freezes the playback pointer, producing silence and usually some
amount of DC offset, depending on the value of the sample where the freeze occurred.
(
SynthDef(\playbuf, {
arg buf = 0, rate = 1, loop = 1, out = 0;
var sig = PlayBuf.ar(1, buf, rate, loop: loop, doneAction: 2);
Out.ar(out, sig ! 2);
}).add;
)
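A brief usage sketch, assuming the SynthDef above and a one-channel buffer stored in b:

x = Synth(\playbuf, [\buf, b.bufnum, \rate, 0.5]); // half speed, one octave down
x.set(\rate, -1); // reverse playback at normal speed
x.free;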
If the rate parameter is one, PlayBuf will read samples from the buffer at the audio server’s
sample rate. If the audio stored in a buffer was recorded at a different sample rate, it will be
resampled to the server’s rate and the perceived pitch of the audio will be altered. For example,
if the buffer contains audio recorded at 44,100 samples per second, and the server is running
at 48,000 samples per second, a PlayBuf with a rate value equal to one will play back audio
with a pitch slightly higher than the original, by a ratio of 48,000 ÷ 44,100 (approximately
1.5 semitones). The most reliable way to guarantee that a rate value equal to one plays the
sample at its original pitch is to scale the rate by an instance of BufRateScale (see Code
Example 4.14), a UGen belonging to a family of buffer utility UGens designed to extract in-
formation from buffers. BufRateScale requires a bufnum and generates a signal whose value
is the buffer’s sample rate divided by the server’s sample rate. When supplied as a rate value for
PlayBuf, it compensates for potential shifts created by buffer/server sample rate mismatches.
BufRateScale and its sibling UGens can run at the control rate or initialization rate. The
control rate is more computationally expensive but able to track changes if one buffer is dy-
namically swapped with another using a set message. Table 4.1 lists various buffer utility
UGens. Within a UGen function, these utility classes are usually a more flexible option than
the language-side methods in Code Example 4.3.
(
SynthDef(\playbuf, {
arg buf = 0, rate = 1, loop = 1, out = 0;
var sig;
rate = rate * BufRateScale.kr(buf);
sig = PlayBuf.ar(1, buf, rate, loop: loop, doneAction: 2);
Out.ar(out, sig ! 2);
}).add;
)
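The size of this pitch shift can be verified with a quick language-side calculation:

(48000/44100).ratiomidi; // -> approximately 1.47 semitones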
It is possible to specify a frame index on which playback will start. When a trigger signal is
received at PlayBuf ’s trigger input, playback will instantaneously jump to the starting frame,
demonstrated in Code Example 4.15. Note the use of a trigger-type argument, introduced in
Chapter 2, which facilitates the ability to repeatedly trigger the frame jump.
(
SynthDef(\playbuf, {
arg buf = 0, rate = 1, t_trig = 1, start = 0, loop = 1, out = 0;
var sig = PlayBuf.ar(1, buf, rate * BufRateScale.kr(buf), t_trig, start, loop, doneAction: 2);
Out.ar(out, sig ! 2);
}).add;
)

x = Synth(\playbuf, [\buf, b.bufnum]);
x.set(\t_trig, 1); // evaluate repeatedly to jump back to the start frame
x.free;
4.3.2 BUFRD
BufRd (short for “buffer read”) serves the same general purpose as PlayBuf, but with a dif-
ferent design. Like PlayBuf, BufRd requires a static channel size and a bufnum. Whereas
PlayBuf has several inputs that indirectly control the position of its internal frame pointer,
BufRd has only a phase input, which is treated as an explicit frame pointer, and which must
be an audio rate signal.
Line can be used as a frame pointer to create a simple “one-shot” player, which appears in
Code Example 4.16. BufFrames and BufDur are useful for retrieving relevant buffer informa-
tion. Because buffer frame indices begin with 0, the correct end value for Line is the number
of frames minus 1. BufRd has no doneAction, and instead relies on its frame pointer or some
other finite signal to provide an appropriate doneAction, when relevant.
(
SynthDef(\bufrd, {
arg buf = 0, out = 0;
var sig, phs;
phs = Line.ar(0, BufFrames.kr(buf) - 1, BufDur.kr(buf),
doneAction: 2);
sig = BufRd.ar(1, buf, phs);
Out.ar(out, sig ! 2);
}).add;
)
Many new users may look at Code Example 4.16 and ask, “How do you make a Line UGen
loop?” The answer is that Line cannot loop. Instead, a UGen capable of generating a repeating
linear ramp, such as Phasor, is necessary. Phasor, demonstrated in Code Example 4.17,
accepts a start and end value, and includes a rate argument, which represents a value incre-
ment per sample. A rate equal to one will cause Phasor’s output to increase by one for each
sample, resulting in normal playback speed when the sample rates of the server and buffer are
identical. BufRateScale should be used to compensate for potentially mismatched sample
rates. Note that the end value of Phasor is the point at which the signal wraps back to the
start value. The end value is never actually generated, so BufFrames is the correct end value,
rather than the number of frames minus 1.
(
SynthDef(\bufrd, {
arg buf = 0, rate = 1, out = 0;
var sig, phs;
rate = rate * BufRateScale.kr(buf);
phs = Phasor.ar(rate: rate, start: 0, end: BufFrames.kr(buf));
sig = BufRd.ar(1, buf, phs);
Out.ar(out, sig ! 2);
}).add;
)
BufRd has a fourth argument named loop, which is often a source of confusion. Setting
this value to 1 does not automatically cause looping behavior. Instead, loop determines how
BufRd will respond if the frame index signal is outside of a valid range. If loop is 0, BufRd
will clip frame values between 0 and the final frame index of its buffer. If loop is 1, BufRd will
wrap out-of-range frame values between 0 and the final frame index. If frame pointer values
are always within a valid range, the loop parameter is irrelevant.
The final argument of BufRd determines the type of sample interpolation that occurs if
the phase signal points to a buffer location between frames, which commonly occurs when
reading through a buffer faster or slower than normal. The default value of 2 corresponds to
linear interpolation, which is fine for most cases. A value of 4 specifies cubic interpolation,
producing a marginally cleaner sound at a marginally higher computational cost. A value of
1 disables interpolation altogether. Non-interpolation is advisable only in situations where the
buffer and server have the same sample rate, and playback will never deviate from “normal”
speed. Otherwise, the output signal will have an audible reduction in quality. For comparison,
the sample interpolation of PlayBuf is always cubic, regardless of playback rate. Companion
Code 4.3 explores additional creative options involving BufRd.
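As a sketch (assuming, as elsewhere in this section, a one-channel buffer stored in b), the interpolation type can be specified by keyword; half-speed playback makes the differences easiest to hear:
(
{
var sig, phs, rate;
rate = 0.5 * BufRateScale.kr(b);
phs = Phasor.ar(rate: rate, start: 0, end: BufFrames.kr(b));
sig = BufRd.ar(1, b, phs, interpolation: 4); // cubic; try 2 (linear) or 1 (none)
sig = sig * 0.5 ! 2;
}.play;
)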
mess around with recording UGens. However, combinations of recording/playback UGens give
rise to new creative possibilities that tend to gravitate toward live delay effects and live looping.
In practice, a microphone signal is one of the most common things to feed into a re-
cording/playback setup—for example, to gradually build up a layered, looped texture during
a live performance. SoundIn provides access to microphone signals, but this UGen is best
introduced alongside a discussion of how the server uses busses to pass signals from one Synth
to another, so this topic is covered in Chapter 6. In this chapter, the signals we’ll record into
buffers will be generated using synthesis techniques, or by playing back other buffers.
4.4.1 RECORDBUF
RecordBuf resembles PlayBuf, with a few notable differences. At minimum, RecordBuf needs
the signal that will be recorded, and the bufnum of the buffer in which recording will take place.
The number of channels of the signal must match the number of channels in the buffer, or the
recording process will fail. Like PlayBuf, RecordBuf has an internal frame pointer, but its rate
cannot be fluidly manipulated. Instead, the pointer moves forward through a buffer at the sam-
pling rate when run is positive, backward at the same speed when negative, and the pointer stops
when run is 0. RecordBuf includes a doneAction, checked when the pointer is on the last frame,
but ignored if loop is 1. When a trigger is received at the trigger input, the frame pointer
jumps to frame 0. Unlike PlayBuf, the jump target is always frame 0 and cannot be modulated.
Basic uses of RecordBuf often begin with allocating an empty buffer via alloc, which
needs the name of the server, a frame count, and a number of channels. To specify a duration
in seconds, the frame count can be expressed by multiplying the duration by the sample rate.
Once a buffer is allocated, a UGen function containing RecordBuf can be used to write a
signal into it. Code Example 4.18 records an enveloped sine tone into a buffer. Note that signal
attenuation occurs after recording, so that the signal is captured at full amplitude. The last
line of code creates an attenuated, two-channel version of the signal, strictly for monitoring
purposes, which has no impact on the recording process. RecordBuf needs no terminating
doneAction, which is handled instead by the signal envelope. Once recording is complete, we
can plot and play the buffer as we normally would. The zero method can be used to clear the
contents of a buffer, replacing all samples with zeroes.
b = Buffer.alloc(s, s.sampleRate * 0.5, 1); // 1/2 sec mono buffer
(
{
var sig = SinOsc.ar(ExpRand(200, 1200));
sig = sig * Env.perc(0.01, 0.49).ar(2);
RecordBuf.ar(sig, b);
sig = sig * 0.25 ! 2;
}.play;
)
b.plot;
Note also in Code Example 4.18 that there’s no need to capture the output signal of RecordBuf
in a variable. The chief purpose of this UGen is to write a signal to a buffer, which happens
regardless of whether it is stored in a variable.
Recording and monitoring are two separate, fully independent processes. We don’t al-
ways want to record the signal we are monitoring, and vice-versa. Even if the UGen function
outputs silence to loudspeakers (by adding a 0 as the final statement of the function), re-
cording will still take place, as shown in Code Example 4.19.
(
{
var sig = SinOsc.ar(ExpRand(200, 1200));
sig = sig * Env.perc(0.01, 0.49).ar(2);
RecordBuf.ar(sig, b);
sig = sig * 0.25 ! 2;
0; // silent output
}.play;
)
b.play(mul: 0.25);
The recLevel and preLevel arguments can be configured to determine whether old buffer
content is overwritten, or if new material is overdubbed (i.e., mixed with old content). These
parameters can also be used to bypass recording. recLevel is a value multiplied by each
sample of the input signal just before recording takes place, and preLevel is a value multiplied
by each sample value that currently exists in the buffer, just before recording takes place.
Technically, RecordBuf always overdubs, but by default, recLevel is 1 and preLevel is 0,
which produces behavior indistinguishable from overwriting. If both arguments are equal
to 1, new content is summed with existing content, and both will be heard on playback. If
recLevel is 0 and preLevel is 1, the incoming signal is silenced and existing buffer content
is untouched. Each time the UGen function in Code Example 4.20 plays, it overdubs a new
random tone, while reducing the amplitude of the existing content by half. After evaluating
the recording function several times, you’ll hear several mixed tones when playing the buffer.
(
{ // evaluate this function 4–5 times
var sig = SinOsc.ar(ExpRand(200, 1200));
sig = sig * Env.perc(0.01, 0.49).ar(2);
RecordBuf.ar(sig, b, recLevel: 1, preLevel: 0.5);
sig = sig * 0.25 ! 2;
}.play;
)
The duration of a sound may not be known in advance of recording, so it’s not always pos-
sible to allocate a buffer with the ideal size. To capture the entirety of a finite sound whose
length is unknown, we can allocate a very large buffer (e.g., a minute or more), and use
sample playback techniques that strategically avoid silence that remains at the end of the
buffer. In other cases, it might only be necessary to capture a few seconds of an ongoing
sound, in which case a shorter buffer can be allocated, and RecordBuf can be configured
to loop through it, continuously overdubbing or overwriting as needed. A demonstration
of taking a short recording of a longer sound is featured in Code Example 4.21. After
the recording Synth is freed, the buffer will contain the last half-second of audio that
occurred.
(
x = {
var sig, freq;
freq = TExpRand.ar(200, 1200, Dust.ar(12));
sig = SinOsc.ar(freq.lag(0.02));
RecordBuf.ar(sig, b, loop: 1);
sig = sig * 0.25 ! 2;
}.play;
)
x.free;

// audition the captured audio by looping the buffer, then fade out
y = {PlayBuf.ar(1, b, BufRateScale.kr(b), loop: 1) * 0.25 ! 2}.play;

y.release(2);
Recording and playback need not occur in separate UGen functions. Buffers are globally avail-
able on the server, so any number of processes can simultaneously interact with a buffer. For
instance, we can move PlayBuf into the previous UGen function and configure RecordBuf
for overdubbing. The result, demonstrated in Code Example 4.22, is a real-time accumulation
of audio layers.
(
x = {
var sig, freq;
freq = TExpRand.ar(200, 1200, Dust.ar(12));
sig = SinOsc.ar(freq.lag(0.02));
RecordBuf.ar(sig, b, recLevel: 1, preLevel: 0.5, loop: 1);
sig = PlayBuf.ar(1, b, BufRateScale.kr(b), loop: 1) * 0.25 ! 2;
}.play;
)
x.release(2);
The preLevel attenuation acts as a stabilizing countermeasure that prevents the overall am-
plitude from spiraling out of control (values greater than 1 are dangerous and inadvisable).
This combination of UGens is a simple model of a digital delay line, closely related to effects
based on delay lines (slapback echo, multi-tap delay, flanger, chorus, reverb, and others).
The multi-tap delay UGen MultiTap is, in fact, built using a combination of PlayBuf and
RecordBuf.
The delay time in Code Example 4.22 can be changed by using a buffer with a different
duration, and the decay time can be increased by setting preLevel closer to 1. When equal to
1, the decay time is infinite. It should be noted that using PlayBuf/RecordBuf in this manner
is a somewhat roundabout way of creating a delay line, a task handled more efficiently by a
family of dedicated delay UGens (explored in Chapter 6), although there is value in under-
standing this conceptual relationship. Companion Code 4.4 explores creative ideas involving
RecordBuf and PlayBuf.
4.4.2 BUFWR
Like RecordBuf, BufWr requires a signal and the buffer in which recording should take place.
The internal frame pointer of BufWr does not move automatically, but instead relies on an
audio rate phase signal to determine where samples are recorded. BufWr has a loop argu-
ment, which (like BufRd) determines whether the UGen will clip or wrap frame indices that
are out-of-range.
Basic usage of BufWr begins with allocating a buffer, after which a Line UGen can be
used to advance the frame pointer, and also free the Synth when the process is complete,
demonstrated in Code Example 4.23.
(
{
var sig, phs;
sig = SinOsc.ar([250, 252]);
phs = Line.ar(0, b.numFrames - 1, b.duration, doneAction: 2);
BufWr.ar(sig, b, phs);
sig = sig * 0.25;
}.play;
)
b.play(mul: 0.25);
b.plot;
With freer control over the movement of the recording pointer, it may be tempting to vary
the duration of Line, or perhaps use an entirely different UGen, to modulate the behavior of
the recording process. However, BufWr cannot interpolate between samples like BufRd, and
there is no way to record an input sample “in-between” two adjacent frames. If the speed of
the recording process is varied such that the frame pointer does not align with buffer samples,
BufWr will record each sample to the nearest available frame, either overwriting neighboring
samples, or leaving tiny gaps of silence in the buffer.
For example, if the duration of Line is reduced, the frame pointer will move through
the buffer more quickly, “stretching out” the input signal and resulting in a pitch reduc-
tion when played back at normal speed. In this case, fewer samples are recorded over
the same amount of space, leaving a “breadcrumb trail” of zeroes (depicted in Figure
4.2 and Code Example 4.24). The expected pitch is audible when played back, but the
sound is colored by harmonic distortion. Reducing the duration of Line by a factor
of 9/10, for example, means that every ten buffer samples will contain one zero-value
sample.
FIGURE 4.2 The visual result when using a slightly faster-than-normal frame pointer for BufWr.
(
{
var sig, phs;
sig = SinOsc.ar([250, 252]);
phs = Line.ar(0, b.numFrames - 1, b.duration * 0.9,
doneAction: 2);
BufWr.ar(sig, b, phs);
sig = sig * 0.25;
}.play;
)
Increasing the duration of Line (thus reducing recording speed) has a similar effect, and the
harmonic distortion is more noticeable with higher-frequency input signals. During the re-
cording process, each sample written to the buffer may be overwritten by one or more subse-
quent samples before the recording pointer advances to the next frame. Upscaling the Line duration by a factor of 3.7, for example, means that roughly 3.7 input samples are written to each frame before the pointer advances, so only about one of every 3.7 input samples survives in the buffer (depicted in Figure 4.3). The result, once again, is a degraded signal, demonstrated in Code Example 4.25. The outcome is essentially identical to aliasing;
something similar would occur if the sample rate of the server were reduced.
FIGURE 4.3 The visual result when using a slightly slower-than-normal frame pointer for BufWr.
(
{
var sig, phs;
sig = SinOsc.ar([2000, 2010]);
phs = Line.ar(0, b.numFrames - 1, b.duration * 3.7, doneAction: 2);
BufWr.ar(sig, b, phs);
sig = sig * 0.25;
}.play;
)
Though it’s technically possible to modulate the movement of the recording pointer, doing
so is generally inadvisable, especially if accurate recording is the goal. The quality loss that
results is likely part of the reason why the internal pointer of RecordBuf can only stand still
or move at the sample rate. To create variations in the speed and pitch of recorded sound, it
is better to capture these sounds at normal speed and vary their playback by manipulating
BufRd or PlayBuf, taking advantage of their ability to interpolate between samples. Of
course, there may be creative situations in which this glitchy flavor of harmonic distortion
is appealing!
Phasor is an ideal choice for loop recording with BufWr. When the recording Synth in
Code Example 4.26 is freed, the buffer will contain the last second of audio that occurred.
(
x = {
var sig, freq, phs;
freq = TExpRand.ar(200, 1200, Dust.ar(12) ! 2);
sig = SinOsc.ar(freq.lag(0.02));
phs = Phasor.ar(0, BufRateScale.kr(b), 0, BufFrames.kr(b));
BufWr.ar(sig, b, phs);
sig = sig * 0.25;
}.play;
)
x.free;
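As before, a playback Synth can audition the loop; one possible sketch, stored in y:

y = {PlayBuf.ar(2, b, loop: 1) * 0.25}.play;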
y.release(2);
BufRd and BufWr can simultaneously interact with the same buffer, shown in Code
Example 4.27, creating a delay effect similar to Code Example 4.22. The playback pointer
can be positioned behind the record pointer by subtracting some number of samples from the
frame pointer signal. Note that the buffer playback signal is scaled down by a factor of 0.5 be-
fore being summed with the original signal to create a more convincing “echo” effect.
(
x = {
var sig, freq, phs, delay;
freq = TExpRand.ar(200, 1200, Dust.ar(12) ! 2);
sig = SinOsc.ar(freq.lag(0.02));
phs = Phasor.ar(0, BufRateScale.kr(b), 0, BufFrames.kr(b));
BufWr.ar(sig, b, phs);
delay = BufRd.ar(2, b, phs - (SampleRate.ir / 3));
sig = sig + (delay * 0.5);
sig = sig * 0.25;
}.play;
)
x.release(2);
Overdubbing is possible with BufWr, though not as straightforward as it is with RecordBuf, due
to a lack of explicit controls for amplitude levels of old/new buffer content. As demonstrated in
Code Example 4.28, the process involves reading the buffer content, scaling down the ampli-
tude (in this case, by a factor of 0.75), summing it with the live signal, and finally recording
the sum signal into the buffer. Line applies a short fade-in to the source signal to prevent
a click.
(
x = {
var sig, freq, phs, delay;
freq = TExpRand.ar(200, 1200, Dust.ar(12) ! 2);
sig = SinOsc.ar(freq.lag(0.02), mul: 0.25);
sig = sig * Line.kr(0, 1, 0.02);
phs = Phasor.ar(0, BufRateScale.kr(b), 0, BufFrames.kr(b));
sig = sig + (BufRd.ar(2, b, phs) * 0.75);
BufWr.ar(sig, b, phs);
sig = sig * 0.25;
}.play;
)
x.release(2);
Companion Code 4.5 explores creative options involving BufWr and BufRd.
4.5 Granular Synthesis

4.5.1 GRAINBUF
The first four arguments of GrainBuf determine the number of channels of the output signal,
a trigger signal which schedules grain generation, the duration of each grain (checked when-
ever a grain is created), and the buffer containing the source signal.
Like PlayBuf and BufRd, the number of channels of GrainBuf must be a fixed pos-
itive integer and cannot be modulated. Bear in mind that this value corresponds to the
number of audio channels that GrainBuf will output, rather than the number of channels
in the source buffer, which must always be one. Impulse and Dust generate single-sample
impulses and are sensible choices for generating rhythmic or arrhythmic triggers, respectively. Assuming a suitable file has been loaded, the code in Code Example 4.29 produces a stream of eight grains per second, each 50 milliseconds long. This and subsequent examples in this section assume the server is booted and that a one-channel audio file has been loaded into a buffer stored in the variable b; these steps are not explicitly included.
(
{
GrainBuf.ar(
numChannels: 2,
trigger: Impulse.kr(8),
dur: 0.05,
sndbuf: b
);
}.play;
)
The fifth argument (rate) is a ratio that determines playback speed of individual grains,
which behaves exactly like the rate argument of PlayBuf. GrainBuf and the other gran-
ular UGens discussed here automatically resample the audio, so that a rate value of one
produces the original sound, regardless of the sample rate of the source file. Therefore, it
should not be necessary to scale the rate using BufRateScale. The sixth argument (pos)
is a normalized position value that determines the starting frame in the buffer from which
grains are generated. Zero represents the beginning of the file, and one represents the end.
If a relatively long grain begins near the end of the buffer, GrainBuf will wrap to the be-
ginning of the buffer to complete the grain. In this case, the grain will contain a bit of the
end and beginning of the source buffer, which may or may not be desirable. If your source
audio file does not fade to zero at both ends, a click may be produced. This wrapping beha-
vior cannot be changed. The code in Code Example 4.30 selects grains that begin halfway
through the buffer and plays each grain at a speed that corresponds to an increase of seven
semitones.
(
{
GrainBuf.ar(
numChannels: 2,
trigger: Impulse.kr(8),
dur: 0.05,
sndbuf: b,
rate: 7.midiratio,
pos: 0.5
);
}.play;
)
A unifying feature of UGens is that, with few exceptions, their arguments can be controlled
by other UGens, and granular UGens are no different. For example, the Line UGen provides
an option for moving the frame pointer through the buffer, rather than holding it at a static
position. Code Example 4.31 advances the grain pointer from start to end over the buffer
duration.
(
{
GrainBuf.ar(
numChannels: 2,
trigger: Impulse.kr(8),
dur: 0.05,
sndbuf: b,
rate: 7.midiratio,
pos: Line.kr(0, 1, BufDur.ir(b))
);
}.play;
)
The seventh argument (interp) determines the type of sample interpolation applied when the
rate argument alters grain pitch. The behavior is identical to BufRd: 4 is cubic, 2 is linear, and
1 disables interpolation. The default is linear, which is usually a good compromise. The eighth
argument (pan) determines the pan position of each grain, and it should be a number between
negative and positive 1. When numChannels is 2, this value behaves as it does with Pan2 (0 is
centered, negative/positive values shift the image to the left/right).
The ninth argument, envbufnum, is a buffer number that determines the shape of the
amplitude envelope applied to each grain. In general, if no amplitude envelope is applied to
a grain, the grain may produce a hard click at its beginning and end. With a high grain den-
sity, this can drastically affect the spectrum and timbre of the overall granular texture. The
default value is negative one, which uses a built-in envelope shaped like a bell curve, providing
a smooth fade-in and fade-out for each grain. An alternate envelope can be created by calling
discretize on an Env and loading the resulting values into a buffer. The duration of the
Env in this case is largely irrelevant, because it will be rescaled based on the duration of each
grain. A relatively high discretization value is sensible to ensure a sufficiently large buffer and
therefore a sufficiently precise representation of the envelope. Code Example 4.32 establishes
a percussive grain envelope, producing a clear, distinct articulation at the start of each grain.
In addition, a noise generator randomizes the pan position of each grain.
(
var env = Env.new([0, 1, 0], [0.01, 1], [0, -4]);
~grainenv = Buffer.loadCollection(s, env.discretize(8192));
)
(
{
GrainBuf.ar(
numChannels: 2,
trigger: Impulse.kr(8),
dur: 0.05,
sndbuf: b,
rate: 7.midiratio,
pos: Line.kr(0, 1, BufDur.ir(b)),
interp: 2,
pan: LFNoise1.kr(10),
envbufnum: ~grainenv
);
}.play;
)
4.5.2 TGRAINS
Overall, TGrains and GrainBuf are similar, with the following exceptions:

1. The source buffer argument is named bufnum rather than sndbuf.
2. Grain position is specified with centerPos, which is measured in seconds rather than as a normalized value between zero and one, and which marks the center of each grain rather than its starting point.
3. TGrains includes an amp argument for scaling grain amplitude, but has no envbufnum argument; grains always use the built-in envelope shape.
4. The order of the arguments is different.

Code Example 4.33 uses TGrains to create roughly the same type of sound produced by the previous GrainBuf example.
(
{
TGrains.ar(
numChannels: 2,
trigger: Impulse.kr(8),
bufnum: b,
rate: 7.midiratio,
centerPos: Line.kr(0, BufDur.ir(b), BufDur.ir(b)),
dur: 0.05,
pan: LFNoise1.kr(10),
amp: 1,
interp: 2
);
}.play;
)
4.5.3 WARP1
Warp1 retains much of the same functionality as GrainBuf and TGrains, but its design is
slightly different, as is the naming and order of its arguments. In a break with consistency, the
numChannels argument of Warp1 corresponds to the number of channels in the source buffer,
rather than the desired number of output channels. Thus, if a one-channel buffer is supplied
for bufnum, the numChannels argument should be 1. Warp1 has no pan argument, so the output
signal will always be the same channel size as the source buffer.
The major design distinction of Warp1 is that there is no explicit trigger signal that governs
the timing of grain generation. Instead, the timing of each grain is a function of the grain
duration and an overlaps parameter, which is the ratio between the duration of the grain
and the duration between adjacent grain onsets. For example, if overlaps is 3, the next
grain will generate when the previous grain is one-third complete. If overlaps is 0.5, there
will be a gap of silence between consecutive grains equal to the duration of one grain. Lastly,
windowRandRatio is a value between 0 and 1 that randomizes grain duration, which is useful
in offsetting the otherwise rhythmic behavior of grain generation, and can help produce a
more pleasantly irregular sound. Code Example 4.34 uses Warp1 to approximate the results of
GrainBuf in Code Example 4.32. As a side note, the implicit frequency at which grains occur
can be calculated by dividing overlaps by windowSize, which in this case is eight grains per
second (0.4 ÷ 0.05).
(
{
var sig;
sig = Warp1.ar(
numChannels: 1,
bufnum: b,
pointer: Line.kr(0, 1, BufDur.ir(b)),
freqScale: 7.midiratio,
windowSize: 0.05,
envbufnum: -1,
overlaps: 0.4,
windowRandRatio: 0,
interp: 2,
mul: 0.5,
);
sig = Pan2.ar(sig, LFNoise1.kr(10));
}.play;
)
Granular synthesis is fertile soil for all sorts of creative experiments, some of which are explored
in Companion Code 4.6.
Notes
1 When using iteration to read audio files into buffers, as pictured in Code Example 4.6, the files will
be read in numerical/alphabetical order. If you want an array to be filled with buffers in a specific
order, a good practice is to attach numerical prefixes to file names before reading them into buffers.
For example, instead of snare.wav, kick.wav, hihat.wav, you may rename these files 00_snare.wav,
01_kick.wav, 02_hihat.wav to ensure they are loaded in this numerical order.
2 For more information on granular synthesis and microsound, see Curtis Roads, Microsound
(Cambridge, Mass: MIT Press, 2001).
CHAPTER 5
SEQUENCING
5.1 Overview
Music is fundamentally rooted in time. Thus, music sequencing tools are an essential part of
any creative audio platform. Indulging in some simplification, a musical composition can be
viewed as a sequence of sections, a section as a sequence of phrases, and a phrase as a sequence
of notes. Thinking in this modular way, that is, conceptualizing a project as smaller sequential
units that can be freely combined, is an excellent way to approach large-scale projects in SC,
and in programming languages more generally.
SC provides a wealth of sequencing options. The Pattern library, for example, is home to
hundreds of classes that define many types of sequences, which can be nested and combined
to form complex, composite structures. The Stream class is also a focal point, which provides
sequencing infrastructure through its subclasses, notably Routine and EventStreamPlayer.
Clock classes provide an implicit musical grid on which events can be scheduled, and play a
central role in sequencing as well.
It’s important to make a distinction between processes that define sequences, and
processes that perform sequences. As an analogy, consider the difference between a notated
musical score and a live musical performance of that score. The score provides detailed per-
formance instructions, and the sound of the music can even be imagined by studying it.
However, the score is not the same thing as a performance. One score can spawn an infinite
number of performances, which may be slightly or significantly different from each other. In
SC, a pattern or function can be used to define a sequence, while some type of stream is used
to perform it.
5.2.1 ROUTINES

Code Example 5.1 defines three one-second tones and a function that plays all three at once:

s.boot;
(
~eventA = {SinOsc.ar(60.midicps ! 2) * Line.kr(0.1, 0, 1, 2)};
~eventB = {SinOsc.ar(70.midicps ! 2) * Line.kr(0.1, 0, 1, 2)};
~eventC = {SinOsc.ar(75.midicps ! 2) * Line.kr(0.1, 0, 1, 2)};
f = {
~eventA.play;
~eventB.play;
~eventC.play;
};
)
f.();
How would we play these tones one-by-one, to create a melody? The Routine class, introduced
in Code Example 5.2, provides one option for timed sequences. A routine is a special type of
state-aware function, capable of pausing and resuming mid-execution. A routine encapsulates a
function, and within this function, either the yield or wait method designates a pause (yield
is used throughout this section, but these methods are synonymous when applied to a number).
Once a routine is created, we can manually step through it by calling next on the routine. On
each next, the routine begins evaluation, suspends when it encounters a pause, and continues
from that point when another next is received. If a routine has reached its end, next returns
nil, but a routine can be reset at any time, which effectively “rewinds” it to the beginning.
(
~eventA = {SinOsc.ar(60.midicps ! 2) * Line.kr(0.1, 0, 1, 2)};
~eventB = {SinOsc.ar(70.midicps ! 2) * Line.kr(0.1, 0, 1, 2)};
~eventC = {SinOsc.ar(75.midicps ! 2) * Line.kr(0.1, 0, 1, 2)};
f = {
~eventA.play;
1.yield;
~eventB.play;
1.yield;
~eventC.play;
1.yield;
};
r = Routine(f);
)
Note that yield must be called from within a routine; evaluating the function f in Code Example 5.2 by itself produces an error. Relatedly, the following three expressions are equivalent ways of creating a routine:
Routine({"hello".postln; 1.yield;});
{"hello".postln; 1.yield;}.r;
r({"hello".postln; 1.yield;});
When stepping through a routine, each yield returns its receiver, in the same way that calling
value on a function returns its last expression. Thus, the returned value from each next call is
equal to each yielded item (this is also why each next causes the number one to appear in the
post window). In Code Example 5.2, the thing we yield is irrelevant, and we have no interest in
it. It could be any object; the number one is chosen arbitrarily. In this example, we are more in-
terested in the actions that occur between yields, specifically, the production of sound. However,
the ability of a routine to return values illustrates another usage. Suppose we want a process that
generates MIDI note numbers that start at 48 and increment by a random number of semitones
between one and four, until the total range exceeds three octaves. We can do so by defining an
appropriate function and yielding values of interest. A while loop is useful, as it allows us to
repeatedly apply an increment until our end condition is met (see Code Example 5.3).
(
~noteFunc = {
var num = 48;
while({num < 84}, {
num.yield;
num = num + rrand(1, 4);
});
};
~noteGen = Routine(~noteFunc);
)
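Stepping through the stream retrieves successive note numbers:

~noteGen.next; // -> 48 on the first evaluation
~noteGen.next; // -> a value between 49 and 52, and so on
~noteGen.reset; // rewind; a new random sequence will be generated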
Note that this sequence is not predetermined, because each next performs a new application
of the rrand function. Thus, if ~noteGen is reset, it will generate a new random sequence,
likely different than the previous.
We often want a routine to advance on its own. A routine will execute automatically in
response to play, shown in Code Example 5.4. In this case, yield values are treated as pause
durations (measured in seconds, under default conditions). Iteration, demonstrated in Code
Example 5.5, is a useful tool for creating timed repetitions.
(
~eventA = {SinOsc.ar(60.midicps ! 2) * Line.kr(0.1, 0, 1, 2)};
~eventB = {SinOsc.ar(70.midicps ! 2) * Line.kr(0.1, 0, 1, 2)};
~eventC = {SinOsc.ar(75.midicps ! 2) * Line.kr(0.1, 0, 1, 2)};
f = {
~eventA.play;
1.yield;
~eventB.play;
1.yield;
~eventC.play;
1.yield;
};
r = Routine(f);
)
r.play;
(
~playTone = { |freq|
{SinOsc.ar(freq ! 2) * Line.kr(0.1, 0, 1, 2)}.play;
};
f = {
3.do({
~playTone.(72.midicps);
0.2.yield;
~playTone.(62.midicps);
0.4.yield;
});
};
r = Routine(f);
)
r.play;
When using iteration in a routine, number.do can be replaced with loop or inf.do (inf is a special keyword that represents infinity) to repeat a block of code indefinitely, as pictured in Code
Example 5.6. A playing routine can be stopped at any time using stop —particularly impor-
tant to keep in mind if a routine has no end! Stopping all routines is also one of the side-effects
of pressing [cmd]+[period]. Once a routine is stopped, it cannot be resumed with play or next
unless it is first reset.
(
~playTone = { |freq|
{SinOsc.ar(freq ! 2) * Line.kr(0.1, 0, 0.2, 2)}.play;
};
r = Routine({
loop({
~playTone.(72.midicps);
0.4.yield;
[62, 63, 64].do({ |n|
~playTone.(n.midicps);
(0.4 / 3).yield;
});
});
});
)
r.play;
r.stop;
Routines can be nested inside other routines, which is a valuable asset when building modular
musical structures. Here, an important distinction arises, related to whether multiple routines
will play simultaneously (in parallel) or sequentially (in series). Code Example 5.7 begins with
two functions (~sub0 and ~sub1), which define an “ABAB” pattern and “CCC” pattern.
These subroutines are nested inside two parent routines, ~r_parallel and ~r_series.
When a routine is played, the resulting process exists in its own temporal space, called
a thread, which is independent from the parent thread in which it was created. Thus, when
multiple routines are played using back-to-back code statements (as they are in ~r_parallel),
each exists independently, unaware of the others’ existences. However, when serial behavior
is desired, embedInStream can be used instead of play, which situates the subroutine in the
parent thread. In this case, the parent routine and the two subroutines are all part of the same
thread and exist along one temporal continuum. Thus, each subroutine begins only when the
previous subroutine has finished.
(
~playTone = { |freq|
{SinOsc.ar(freq ! 2) * Line.kr(0.1, 0, 0.2, 2)}.play;
};
~sub0 = {
2.do({
~playTone.(67.midicps);
0.15.yield;
~playTone.(69.midicps);
0.15.yield;
});
0.5.yield;
};
~sub1 = {
3.do({
~playTone.(75.midicps);
0.5.yield;
});
1.yield;
};
~r_parallel = Routine({
Routine(~sub0).play;
Routine(~sub1).play;
});
~r_series = Routine({
Routine(~sub0).embedInStream;
Routine(~sub1).embedInStream;
});
)
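Playing the parent routines (one at a time) reveals the difference:

~r_parallel.play; // ~sub0 and ~sub1 sound simultaneously

~r_series.play; // ~sub1 begins only after ~sub0 finishes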
Modular thinking is important and valuable. Before setting out to build some glorious
routines, take a moment to conceptualize the musical structure they’ll represent. Break down
your structures into simpler units, build subroutines (perhaps sub-subroutines), and com-
bine them appropriately. A large, unwieldy, irreducible routine is usually harder to debug than
a routine built from modular parts.
5.2.2 TEMPOCLOCK
In the previous section, yield values are interpreted as durations, measured in seconds. Unless
you’re working at a tempo of 60 (or perhaps 120) beats per minute, specifying durations in
seconds is inconvenient, and requires extra math. To determine the duration of an eighth note
at 132 bpm, for example, we first divide 60 by the tempo to produce a value that represents
seconds per beat. If a quarter note is considered one beat, we then divide by two to get the
duration of an eighth note:
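60 / 132 / 2; // -> 0.22727... (the duration of an eighth note, in seconds)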
Manual calculation of durations based on tempo is cluttered and does not do a good job of
visually conveying metric structure. TempoClock, one of three clock objects (its siblings are
SystemClock and AppClock), provides an elegant solution. These three clocks handle the
general task of scheduling things at specific times. AppClock is the least accurate of the three,
but it can schedule certain types of actions that its siblings cannot, notably, interacting with
graphical user interfaces. SystemClock and TempoClock are more accurate, but TempoClock
has the additional benefit of being able to express time in terms of beats at a particular tempo.
When the interpreter is launched or rebooted, a default instance of TempoClock is automat-
ically created:
TempoClock.default;
The default TempoClock runs at 60 bpm, and the current beat can be retrieved via the beats
method:
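TempoClock.default.beats; // the current beat count of the default clock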
A routine can be scheduled on a specific clock by providing that clock as the first argument for
play. In Code Example 5.8, yield times represent beat values, relative to the clock on which the
routine plays. The tempo of a TempoClock can be changed at any time using the tempo method,
also depicted in this same example. Note that tempo only affects durations between onsets and
has no effect on the durations of the notes themselves (which in this case are determined by Line).
(
t = TempoClock(132/60);
~playTone = { |freq|
{SinOsc.ar(freq ! 2) * Line.kr(0.1, 0, 1, 2)}.play;
};
r = Routine({
[60, 70, 75].do({ |n|
~playTone.(n.midicps);
(1/2).yield;
});
});
)
r.play(t);
t.tempo = 112/60;
We often want to synchronize several timed processes, so that they begin together and/or
exist in rhythmic alignment. Manual attempts to synchronize a routine with one that’s al-
ready playing will rarely succeed. Even if both routines are played on the same clock, their
performances are not guaranteed to align with each other. By default, when a routine is played
on a clock, it begins immediately. To schedule a routine to begin on a certain beat, we can
specify quant information along with the clock, shown in Code Example 5.9.
(
t = TempoClock(132/60);
~playTone = { |freq|
{SinOsc.ar(freq ! 2) * Line.kr(0.1, 0, 0.2, 2)}.play;
};
~r0 = Routine({
loop({
[60, 63, 65, 67].do({ |n|
~playTone.(n.midicps);
(1/2).yield;
});
});
});
~r1 = Routine({
loop({
[70, 72, 75, 77].do({ |n|
~playTone.(n.midicps);
(1/2).yield;
});
});
});
)
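For example, both routines can be scheduled to begin on the next multiple of four beats (the quant value here is illustrative):

~r0.play(t, quant: 4);

~r1.play(t, quant: 4); // evaluate later; it waits for the next 4-beat boundary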
In some cases, you may want a rhythmic process to be quantized to a particular beat but
shifted in time to begin some number of beats after or before that beat. In this case, an array of
two values can be provided for quant, seen in Code Example 5.10. The first value represents
a beat multiple, and the second value represents a number of beats to shift along the timeline.
Negative values result in earlier scheduling.
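For example, the following sketch schedules the routine to begin one half-beat after the next four-beat boundary:

~r1.play(t, quant: [4, 1/2]);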
By default, user-created clocks are stopped and discarded when [cmd]+[period] is pressed. To protect a clock from [cmd]+[period], set its permanent attribute to true:

t.clear;
t = TempoClock(132/60).permanent_(true);
// press [cmd]+[period]...
t.permanent = false;
// press [cmd]+[period]...
These chapter sections aim to convey the essentials of using routines and TempoClock to
express and perform musical sequences. Though the sounds are simple, the actions in a rou-
tine can be replaced with other valid statements, such as creating new Synths or sending set
messages to existing Synths. Companion Code 5.1 explores further ideas involving these two
classes, which may help expand your understanding.
5.3 Patterns
Patterns, which exist as a family of classes that all begin with capital P, provide flexible,
concise tools for expressing musical sequences. A pattern defines a sequence but does not
perform that sequence. To retrieve a pattern’s output, we can use asStream to return a
routine, which can then be evaluated with next. In contrast to creating such routines
ourselves, patterns are simpler, pre-packaged units with known behaviors, and tend to
save time.
Patterns are, in a sense, a language of their own within the SC language, and it takes time
to get familiar. At first, it can be difficult to find the right pattern (or combination of patterns)
to express an idea. But, when learning a foreign language, you don’t need to memorize the dic-
tionary to become fluent. You only need to learn a small handful of important words, enough
to form a few coherent, useful sentences, and the rest will follow with practice and exposure.
Patterns are among the most thoroughly documented classes, and there are multiple tutorials
built into the help browser.1
(
~pat = Pseries(start: 50, step: 7, length: 6);
~seq = ~pat.asStream;
)
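Values can then be retrieved one at a time:

~seq.next; // -> 50
~seq.next; // -> 57, and so on up to 85, after which next returns nil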
(
~pat = {
var num = 50, inc = 7, count = 0;
while({count < 6}, {
num.yield;
num = num + inc;
count = count + 1;
});
};
~seq = Routine(~pat);
)
Patterns are described as “stateless.” They represent a sequence but are distinct from and com-
pletely unaware of its actualization. To emphasize, observe the result when trying to extract
values directly from a pattern:
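Pseries(50, 7, 6).next; // -> returns the Pseries itself, not a value from its sequence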
The nextN method returns an array of the next several values from a stream, and all returns an array containing all of
them. To demonstrate, Code Example 5.14 also introduces Pwhite, which defines a sequence
of random values selected from a range with a uniform distribution. Like rrand, Pwhite
defines a sequence of integers if its boundaries are integers, and floats if either boundary is
a float.
(
~pat = Pwhite(lo: 48, hi: 72, length: 10);
~seq = ~pat.asStream;
)
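~seq.nextN(3); // e.g. -> [ 62, 51, 70 ] (randomly generated values)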
~seq.reset;
~seq.all;
Infinite-length sequences are possible, shown in Code Example 5.15, created by specifying
inf for the pattern length or number of repeats. Here, we introduce Prand, which randomly
selects an item from an array.
(
~pat = Prand(list: [4, 12, 17], repeats: inf);
~seq = ~pat.asStream;
)
TIP.RAND(); CALLING “ALL” ON AN INFINITE-LENGTH STREAM

Like playing an infinite-length routine with no yields, calling all on an infinite-length stream will crash the program!
Table 5.1 provides a list of commonly encountered patterns used to generate numerical
sequences. Note that some patterns, like Pseries and Pwhite, can only define sequences with
numerical output, while others, like Pseq and Prand, output items from an array and can
therefore define sequences that output any kind of data.
TABLE 5.1 A list of commonly encountered patterns for generating numerical sequences.
Mathematical operations and methods that apply to numbers can also be applied to
patterns that specify numerical sequences. Imagine creating a sequence that outputs random
values between ±1.0, but also wanting that sequence to alternate between positive and neg-
ative values. One solution involves multiplying one pattern by another, demonstrated in
Code Example 5.16. The result is a composite pattern that defines a sequence in which
corresponding pairs of output values are multiplied. Code Example 5.17 offers another so-
lution, involving nesting patterns inside of another. When an array-based pattern (such as
Pseq or Prand) encounters another pattern as part of its output, it embeds the entire output
of that pattern before moving on to its next item (similar to the embedInStream method for
routines).
(
~pat = Pwhite(0.0, 1.0, inf) * Pseq([-1, 1], inf);
~seq = ~pat.asStream;
~seq.nextN(8);
)
(
~pat = Pseq([
Pwhite(-1.0, 0.0, 1),
Pwhite(0.0, 1.0, 1)
], inf);
~seq = ~pat.asStream;
~seq.nextN(10);
)
5.4 Events

An Event is a type of collection that stores data as key-value pairs. As a simple illustration, we can model an aquarium inventory as an Event in which each key is a species, associated with a count:

a = (goldfish: 8, guppy: 5);

If we acquire three fish of a new breed, we can update the Event by adding a new key-value association:
a[\clownfish] = 3;
If a sixth guppy appears, we can update the value at that key. Because each key in an Event is
unique, this expression overwrites the previous key, rather than creating a second identical key:
a[\guppy] = 6;
To retrieve the number of goldfish, we access the item stored at the appropriate key:
a[\goldfish]; // -> 8
An Event is not just a storage device; it is commonly used to model an action taken in response
to a play message, in which its key-value pairs represent parameters and values necessary to
that action. Often, this action is the creation of a new Synth, but Events model other actions,
like musical rests, set messages, or sending MIDI data. There are numerous pre-made Event
types that represent useful actions, each pre-filled with sensible default values. A complete list
of built-in Event types can be retrieved by evaluating:
Event.eventTypes.keys.postcs;
Each Event type represents a different type of action and thus expects a unique set of keys.
Most Event types are rarely used and largely irrelevant for creative applications. The most
common, by far, is the note Event, which models the creation of a Synth. This is the default
Event type if unspecified. The components and behaviors of the note Event are given default
values that are so comprehensive that even playing an empty Event generates a Synth and
produces a sound, if the server is booted:
().play;
On evaluation, the post window displays the resulting Event, which looks roughly like this:
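-> ( 'instrument': default, 'amp': 0.1, 'server': localhost,
'freq': 261.6255653006, 'sustain': 0.8 )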
The default note Event includes a freq key with a value of approximately 261.6 (middle C),
an amp key with a value of 0.1, and a few other items that are relatively unimportant at the
moment. We can override these default values by providing our own. For example, we can
specify a higher and slightly louder pitch:
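(freq: 523.3, amp: 0.15).play; // illustrative values: an octave higher, slightly louder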
Where does this sound come from? The instrument key specifies the SynthDef to be used,
and there is a \default SynthDef, automatically added when the server boots. This default
SynthDef is primarily used to support code examples in pattern help files, and can be found in
the Event source code within the makeDefaultSynthDef method. When a SynthDef name is
provided for an Event’s instrument key, that SynthDef’s arguments also become meaningful
Event keys. Event keys that don’t match SynthDef arguments and aren’t part of the Event def-
inition will have no effect. For example, the default SynthDef has a pan position argument,
but no envelope parameters. Thus, in the following line, pan will shift the sound toward the
left, but atk does nothing:
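(pan: -0.8, atk: 3).play; // illustrative values; pan applies, atk has no effect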
The default SynthDef is rarely used for anything beyond simple demonstrations. Code
Example 5.18 uses an Event to create a Synth using a SynthDef adapted from Companion
Code 3.9.
(
SynthDef(\bpf_brown, {
arg atk = 0.02, rel = 2, freq = 800,
rq = 0.005, pan = 0, amp = 1, out = 0;
var sig, env;
env = Env([0, 1, 0], [atk, rel], [1, -2]).kr(2);
sig = BrownNoise.ar(0.8);
sig = BPF.ar(sig, freq, rq, 1 / rq.sqrt);
sig = Pan2.ar(sig, pan, amp) * env;
Out.ar(out, sig);
}).add;
)
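Once the SynthDef has been added, an Event can play it by naming it in the instrument key (parameter values here are illustrative):

(instrument: \bpf_brown, freq: 500, amp: 0.4).play;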
A few additional details about internal Event mechanisms are worth discussing. Throughout
earlier chapters, the argument names freq and amp were regularly used to represent the
frequency and amplitude of a sound. We can technically name these parameters whatever we
like, but these specific names are chosen to take advantage of a flexible system of pitch and
volume specification built into the Event paradigm. In addition to specifying amplitude via
amp, we can also specify amplitude as a value in decibels, using the db key:
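(instrument: \bpf_brown, db: -12).play; // -12 dB is converted to amp ≈ 0.25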
How is this possible? Despite the fact that db is not one of our SynthDef arguments, the Event
knows how to convert and apply this value correctly. We can understand this behavior more
clearly by examining some internals of the Event paradigm:
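Event.partialEvents.ampEvent[\db]; // -> -20.0
Event.partialEvents.ampEvent[\amp].postcs; // -> a Function: { ~db.dbamp }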
In the absence of user-provided values, the default db value of -20.0 is converted to a normalized
amplitude of 0.1. If a db value is provided, that value is converted to an amplitude. If an amp
value is directly provided, it “intercepts” the conversion process, temporarily overwriting the
function that performs the conversion, such that the amp value is directly provided to the
Synth. It’s important to remember that this flexibility is only available if the SynthDef includes
an argument named “amp” that is used in the conventional sense (as a scalar that controls
output level). If, for example, our SynthDef used a variable named “vol” for amplitude control,
then neither amp nor db would have any effect on the sound if provided in the Event, akin to
misspelling a SynthDef argument name when creating a Synth. In this case, our only option
for level control would be “vol,” and we would not have access to this two-tier specification
structure. To specify level as a decibel value, we would have to perform the conversion our-
selves, for example:
(vol: -20.dbamp).play;
The situation with freq is similar, but with more options. If “freq” is declared as a
SynthDef argument that controls pitch, then four layers of pitch specification become avail-
able. These four options are somewhat intertwined, but are roughly expressible as follows:
1. “degree,” along with “scale” and “mtranspose,” allows modal expression of pitch as a
scale degree, with the possibility of modal transposition.
2. “note,” along with “root,” “octave,” “gtranspose,” “stepsPerOctave,” and “octaveRatio,”
allows specification of pitch as a scale degree within an equal-tempered framework,
with a customizable octave ratio and arbitrary number of divisions per octave.
3. “midinote,” along with “ctranspose” and “harmonic,” allows specification of pitch as
MIDI note numbers (non-integers are allowed), with the option for chromatic transpo-
sition and specification of a harmonic partial above a fundamental.
4. “freq,” along with “detune,” allows specification of a frequency value measured in
Hertz, with an optional offset amount added to this value.
A graphic at the bottom of the Event help file depicts this pitch hierarchy, and a slightly
modified version of this graphic appears in Figure 5.1. The final internally calculated value,
detunedFreq, is ultimately passed as the freq value for the created Synth. Code Example 5.19
provides examples of pitch specification using these four tiers.
FIGURE 5.1 A visualization of the pitch specification hierarchy in the Event paradigm.
(degree: 0).play;
(degree: 1).play; // modal transposition by scale degree
(note: 0).play;
(note: 2).play; // chromatic transposition by semitones
(midinote: 60).play;
(midinote: 62).play; // MIDI note numbers
(freq: 261.626).play;
(freq: 293.665).play; // Hertz
TIP.RAND(); FLATS AND SHARPS WITH SCALE DEGREES
The degree key has additional flexibility for specifying pitch. Altering a degree value
by ±0.1 produces a transposition by one semitone, akin to notating a sharp or flat
symbol on a musical score:
(degree: 0).play;
(degree: 0.1).play; // sharp
Similarly, “s” and “b” can be appended to an integer, which has the same result:
(degree: 0).play;
(degree: 0b).play; // flat
Again, these pitch options are only available if an argument named “freq” is declared in the
SynthDef and used conventionally. In doing so, pitch information is specifiable at any of these
four tiers, and calculations propagate through these tiers from “degree” to “detunedFreq.” The
functions that perform these calculations can also be examined:
Event.parentEvents.default[\degree]; // default = 0
Event.parentEvents.default[\note].postcs;
Event.parentEvents.default[\midinote].postcs;
Event.parentEvents.default[\freq].postcs;
Event.parentEvents.default[\detunedFreq].postcs;
Event-based note releases rely on a gated envelope. Code Example 5.20 rebuilds the SynthDef from Code Example 5.18 with a gate argument and a sustaining envelope:

(
SynthDef(\bpf_brown, {
arg atk = 0.02, rel = 2, gate = 1, freq = 800,
rq = 0.005, pan = 0, amp = 1, out = 0;
var sig, env;
env = Env.asr(atk, 1, rel).kr(2, gate);
sig = BrownNoise.ar(0.8);
sig = BPF.ar(sig, freq, rq, 1 / rq.sqrt);
sig = Pan2.ar(sig, pan, amp) * env;
Out.ar(out, sig);
}).add;
)
The default note Event includes a sustain key, which represents a duration after which a
(\gate, 0) message is automatically sent to the Synth. The sustain value is measured in
beats, the duration of which is determined by the clock on which the Event is scheduled (if no
clock is specified, the default TempoClock is used, which runs at 60 bpm). This mechanism
assumes the SynthDef has an argument named “gate,” used to release some sort of envelope
with a terminating doneAction. If this argument has a different name or is used for another
purpose, the Synth will become stuck in an “on” state, likely requiring [cmd]+[period]. The
backend of the Event paradigm is complex, and perplexing situations (e.g., stuck notes) may
arise from time to time. The Companion Code that follows the next section attempts to de-
mystify common pitfalls. Although the Event paradigm runs deeper than discussed here, this
general introduction should be enough to help you get started with Event sequencing using
the primary Event pattern, Pbind. For readers seeking more detailed information on Event
features and behaviors, the Event help file provides additional information, particularly in
a section titled “Useful keys for notes.” Relatedly, the code in Code Example 5.21 can be
evaluated to print a list of keys that are built into the Event paradigm.
(
Event.partialEvents.keys.do({ |n|
n.postln;
Event.partialEvents[n].keys.postln;
\.postln;
});
)
Pbind is the primary pattern for generating Event sequences. Its arguments are pairs, in which each Event key is followed by the value pattern that will supply its values. Like other patterns, a Pbind can be converted to a stream with asStream, and Events can be extracted one at a time with next. The only difference from the value streams in Section 5.3.1 is that here, we must provide an empty starting Event to be populated (hence the additional set of parentheses inside of next).
(
p = Pbind(
\midinote, Pseq([55, 57, 60], 2),
\db, Pwhite(-20.0, -10.0, 6),
\pan, Prand([-0.5, 0, 0.5], 6)
);
)
~seq = p.asStream;
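~seq.next(()); // -> an Event; evaluate repeatedly to extract successive Events
~seq.next(()).play; // or extract the next Event and play it in one step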
In Code Example 5.22, each internal value pattern specifies exactly six values, so the stream
produces six Events. Because the Pbind does not specify otherwise, the created Events are
note-type Events that use the default SynthDef. The “midinote” and “db” keys undergo in-
ternal calculations to yield “freq” and “amp” values, which are supplied to each Synth.
Manually extracting and playing Events one-by-one is rarely done, intended here as only an
illustrative example. More commonly, we play a Pbind, which returns an EventStreamPlayer,
a type of stream that performs an Event sequence. As demonstrated in Code Example 5.23,
an EventStreamPlayer works by automating the Event extraction process, and scheduling the
Events to be played on a clock, using the default TempoClock if unspecified, thus generating
a timed musical sequence.
(
p = Pbind(
\midinote, Pseq([55, 57, 60], 2),
\db, Pwhite(-20.0, -10.0, 6),
\pan, Prand([-0.5, 0, 0.5], 6)
);
~seq = p.play;
)
A value pattern inside a Pbind can be monitored by applying the trace method, which posts each value as it is generated:

(
p = Pbind(
\midinote, Pseq([55, 57, 60], 2).trace,
);
~seq = p.play;
)
The onset of each Synth occurs one second after the preceding onset. But where does this
timing information originate? Timing information is typically provided using the dur key,
which specifies a duration in beats and has a default value of one:
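Event.partialEvents.durEvent[\dur]; // -> 1.0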
We can provide our own timing information, which may be a value pattern, as shown in Code
Example 5.24.
(
p = Pbind(
\dur, Pseq([0.75, 0.25, 0.75, 0.25, 0.5, 0.5], 1),
\midinote, Pseq([55, 57, 60], 2),
\db, Pwhite(-20.0, -10.0, 6),
\pan, Prand([-0.5, 0, 0.5], 6)
);
~seq = p.play;
)
Pbind can specify an infinite-length Event stream. If each internal value pattern is infinite,
the Event stream will also be infinite. If at least one internal value pattern is finite, the length
of the Event stream will be equal to the length of the shortest value pattern. If an ordi-
nary number is provided for one of Pbind’s keys, it will be interpreted as an infinite-length
value stream that repeatedly outputs that number. An EventStreamPlayer can be stopped
at any time with stop. Unlike routines, a stopped EventStreamPlayer can be resumed with
resume, causing it to continue from where it left off. Code Example 5.25 demonstrates these
concepts: the midinote pattern is the only finite value pattern, which produces 24 values and
thus determines the length of the Event stream. The db value −15 is interpreted as an infinite
stream that repeatedly yields −15.
(
p = Pbind(
\dur, Pseq([0.75, 0.25, 0.75, 0.25, 0.5, 0.5], inf),
\midinote, Pseq([55, 57, 60], 8),
\db, -15
);
~seq = p.play;
)
~seq.stop;
~seq.resume;
In a Pbind, only values or value patterns should be paired with Event keys. It may be tempting
to use a method like rrand to generate random values for a key. However, this will result in a
random value being generated once when the Pbind is created and used for every Event in the
output stream, demonstrated in Code Example 5.26. Instead, the correct approach is to use
the pattern or pattern combination that represents the desired behavior (in this case, Pwhite).
(
p = Pbind(
\dur, 0.2,
\midinote, rrand(50, 90), // <- should use Pwhite(50, 90) instead
);
~seq = p.play;
)
Like routines, EventStreamPlayers can be scheduled on a specific clock and quantized to a shared beat grid:

(
t = TempoClock(90/60);
p = Pbind(
\dur, 0.25,
\midinote, Pwhite(48, 60, inf),
);
q = Pbind(
\dur, 0.25,
\midinote, Pwhite(72, 84, inf),
);
)
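One possible usage, assuming the block above has been evaluated (the quant value is illustrative):

p.play(t, quant: 4);

q.play(t, quant: 4); // evaluated later, q still aligns with p's beat grid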
Combined with a general understanding of Events, patterns, and streams, Pbind unlocks
an endless supply of ideas for musical sequences. Companion Code 5.2 discusses additional
details and includes a collection of more elaborate demonstrations.
To employ rests in an Event sequence, we can embed instances of the Rest class in a Pbind’s
dur pattern. Each rest is given a duration, measured in beats, pictured in Code Example 5.29.
For comparison, the phrase is first expressed without rests:

t = TempoClock.new(112/60);
(
Pbind(
\dur, Pseq([1/2, 1/2, 1/2, 1/4, 1/2, 1/2, 1/2], 1),
\sustain, 0.1,
\degree, Pseq([4, 5, 7, 4, 5, 7, 8], 1),
).play(t);
)
Code Example 5.29 rewrites the phrase, expressing the notated rests as Rest objects in the dur pattern:

t = TempoClock.new(112/60);
(
Pbind(
\dur, Pseq([
Pseq([Rest(1/4), 1/4], 4), // bar 1
Pseq([1/4, Rest(1/4), 1/4, Rest(1/4), 1/4, Rest(3/4)]) // bar 2
], 1),
\sustain, 0.1,
\degree, Pseq([
0, 4, 0, 5, 0, 7, 0, 4, // bar 1
5, 0, 7, 0, 8, 0, 0, 0 // bar 2
], 1),
).play(t);
)
When the code in Code Example 5.29 is quantized, the true downbeat of the phrase occurs
on the target beat. Note that the pitch values nicely resemble the metric layout of the notated
pitches. The pitch values that align with rest Events are ultimately meaningless since they do
not sound. Zeroes are used for visual clarity, but any value is fine.
Other options exist for expressing a sequential mix of notes and rests. Rest Events can be
generated by using a pattern to determine type values. This approach lets us use a constant
dur value equal to the smallest beat subdivision necessary to express the rhythm, and the
type pattern handles the determination of Event type (see Code Example 5.30). Another op-
tion involves supplying symbols for a pitch-related pattern (such as degree, note, midinote,
or freq). If the pitch value of a note Event is a symbol, the Event becomes a rest Event.
The simplest option is to use the empty symbol, expressed as a single backslash (see Code
Example 5.31).
t = TempoClock.new(112/60);
(
Pbind(
\type, Pseq([
Pseq([\rest, \note], 4), // bar 1
Pseq([\note, \rest], 2), \note, Pseq([\rest], 3) // bar 2
], 1),
\dur, 1/4,
\sustain, 0.1,
\degree, Pseq([
0, 4, 0, 5, 0, 7, 0, 4, // bar 1
5, 0, 7, 0, 8, 0, 0, 0 // bar 2
], 1),
).play(t);
)
t = TempoClock.new(112/60);
(
Pbind(
\dur, 1/4,
\sustain, 0.1,
\degree, Pseq([
\, 4, \, 5, \, 7, \, 4, // bar 1
5, \, 7, \, 8, \, \, \ // bar 2
], 1),
).play(t);
)
Though it’s possible to achieve a similar result by strategically providing zeroes in an ampli-
tude pattern, this is less efficient. In this case, every Event will produce a Synth, and every
Synth consumes some processing power, regardless of its amplitude.
The filter patterns Pfin and Pfindur constrain an otherwise infinite Event pattern: Pfin limits the stream to a fixed number of Events, while Pfindur limits the stream to a total duration, measured in beats. The following two examples limit the same Pbind to sixteen Events and to three beats, respectively:

(
p = Pbind(
\dur, 1/8,
\sustain, 0.02,
\freq, Pexprand(200, 4000, inf),
);
q = Pfin(16, p);
~seq = q.play; // the stream ends after 16 Events
)
(
p = Pbind(
\dur, 1/8,
\sustain, 0.02,
\freq, Pexprand(200, 4000, inf),
);
q = Pfindur(3, p);
~seq = q.play; // the stream ends after 3 beats
)
Pfin and Pfindur are helpful in allowing us to focus on the finer musical details of a pattern
during the compositional process, working in an infinite mindset and not concerning our-
selves with total duration. When the sound is just right, we can simply wrap the pattern in one
of these limiters to constrain its lifespan as needed.
Code Example 5.34 defines two short melodic phrases as separate Pbinds:

(
~p0 = Pbind(
\dur, 1/6,
\degree, Pseq([0, 2, 3, 5], 1),
\sustain, 0.02,
);
~p1 = Pbind(
\dur, 1/6,
\degree, Pseq([2, 4, 5, 7], 1),
\sustain, 0.02,
);
)
Suppose we wanted to play these phrases twice in sequence, for a total of four phrases. Pseq
provides an elegant solution, pictured in Code Example 5.35, which demonstrates its ability
to sequence Events, and not just numerical values. A composite pattern need not be deter-
ministic; any value pattern that retrieves items from an array can be used for this purpose. If
Prand([~p0, ~p1], 4) is substituted for Pseq, for example, the Event stream will play four
phrases, randomly chosen from these two.
(
~p_seq = Pseq([~p0, ~p1], 2);
~player = ~p_seq.play;
)
To play these two phrases, we could simply enclose ~p0.play and ~p1.play in the same block
and evaluate them with one keystroke, but this approach only sounds the patterns together—it
does not express these two patterns as a singular unit. Ppar (see Code Example 5.36) is a better
option, which takes an array of several Event patterns, and returns one Event pattern in which
the individual patterns are essentially superimposed into a single sequence. Like Pseq, Ppar also
accepts a repeats value. Ptpar is similar (see Code Example 5.37) but allows us to specify a
timing offset for each individual pattern used to compose the parallel composite pattern.
(
~p_par = Ppar([~p0, ~p1], 3);
~seq = ~p_par.play;
)
(
p = Ptpar([
0, Pseq([~p1], 3),
1/12, Pseq([~p0], 3)
], 1);
~seq = p.play;
)
As you may be able to guess from the previous examples, the composite patterns returned
by Pseq, Ppar, and Ptpar retain the ability to be further sequenced into even larger com-
posite patterns. In some cases, the final performance structure of a composition involves a
parent Pseq, which contains Pseq/Ppar patterns that represent large-scale sections, which
contain Pseq/Ppar patterns that represent sub-sections, and so forth, all the way down
to note-level details. Companion Code 5.3 focuses on an example of how such a project
might take shape. The central idea is that patterns should be treated as modular building
blocks, which can be freely combined and may represent all sequential aspects of a com-
position. Conceptualizing time-based structures within the pattern framework provides
great flexibility, allowing complex sequential ideas to be expressed and performed rela-
tively easily.
5.5.1 PDEFN
Pdefn serves as a proxy for a single value pattern, deployed by wrapping it around the desired
pattern. In Code Example 5.39, it serves as a placeholder for the degree pattern. While the
EventStreamPlayer ~seq is playing, the pitch information can be dynamically changed by
overwriting the proxy data with a new value pattern.
(
p = Pbind(
\dur, 0.2,
\sustain, 0.02,
\degree, Pdefn(\deg0, Pwhite(-4, 8, inf)),
);
~seq = p.play;
)
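While ~seq plays, evaluating a new definition for \deg0 changes the pitch content on the fly (the replacement pattern here is illustrative):

Pdefn(\deg0, Pseq([0, 2, 4, 5, 7], inf));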
Note that if an infinite value pattern is replaced with a finite one, the EventStreamPlayer
to which that pattern belongs also becomes finite. One practical application of this feature,
pictured in Code Example 5.40, is to fade out an EventStreamPlayer by replacing its amplitude
pattern with one that gradually decreases to silence or near-silence over a finite period of time.
(
p = Pbind(
\dur, 0.2,
\sustain, 0.02,
\degree, Pwhite(-4, 8, inf),
\db, Pdefn(\db0, -20),
);
~seq = p.play;
)
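A sketch of such a fade-out, replacing the db proxy with a finite, descending pattern:

Pdefn(\db0, Pseq((-20, -22 .. -60), 1)); // the stream ends when the fade completes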
Multiple Pdefn objects can be independently used and manipulated in the context of one
Pbind. In this case, care should be taken to ensure that no two Pdefns share the same name.
If they do, one will overwrite the other, much in the same way that an Event cannot contain
two different pieces of data at the same key. It’s also possible to deploy one Pdefn in several dif-
ferent places. A change to a Pdefn propagates through all its implementations, demonstrated
in Code Example 5.41.
(
Pdefn(\deg0, Pseq([0, 4, 1, 5], inf));
p = Pbind(
\dur, 0.2,
\sustain, 0.02,
\degree, Pdefn(\deg0),
);
q = Pbind(
\dur, 0.2,
\sustain, 0.02,
\degree, Pdefn(\deg0) + 2,
);
p.play;
q.play;
)

Pdefn(\deg0, Pseq([0, 3, 4, 7], inf)); // evaluate later; both streams change (illustrative pattern)
Pdefn can be quantized using techniques similar to those previously discussed, but the syntax
is slightly different, demonstrated in Code Example 5.42. Instead of specifying quant as an
argument, each Pdefn has its own “quant” attribute, accessible by applying the quant method
to the Pdefn and setting a desired value.
(
t = TempoClock(128/60);
p = Pbind(
\dur, 1/4,
\sustain, 0.02,
\note, Pdefn(\note0, Pseq([0, \, \, \, 1, 2, 3, 4], inf)),
);
~seq = p.play(t);
)
(
Pdefn(\note0,
Pseq([7, \, 4, \, 1, \, \, \], inf)
).quant_(4);
)
Pdefn “remembers” its quantization value, so it’s unnecessary (but harmless) to re-specify this
information for subsequent Pdefn changes meant to adhere to the same value. We can nullify
a quant value by setting it to nil.
Code Example 5.43 shows a few additional techniques. Pdefn’s data can be queried with
source (optionally appending postcs for verbosity), and a Pdefn’s data can be erased with
clear. We can also iterate over all Pdefn objects to erase each one.
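A sketch of these techniques:

Pdefn(\deg0).source; // query the current value pattern
Pdefn(\deg0).source.postcs; // post it as a compile string
Pdefn(\deg0).clear; // erase this Pdefn's data
Pdefn.all.do({ |pdefn| pdefn.clear }); // erase every Pdefn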
5.5.2 PDEF
Pdef, demonstrated in Code Example 5.44, is similar to Pdefn, and features many of the
same techniques. The primary difference is that Pdef is a proxy for an Event pattern, rather
than a value pattern. Typically, the data of a Pdef is a Pbind, but may also be a Pseq or Ppar
that represents a composite of several Pbinds. When an Event pattern is encapsulated within
a Pdef, any part of it can be changed while the Pdef is playing, without interruption of the
Event stream. Pdef is generally useful in that it avoids the need to create multiple, independent
Pdefns within a Pbind.
(
t = TempoClock(128/60);
Pdef(\seq,
Pbind(
\dur, 0.25,
\sustain, 0.02,
\degree, Pseq([0, 2, 4, 5], inf),
)
).clock_(t).quant_(4);
Pdef(\seq).play;
)
(
Pdef(\seq, // swap the old Pbind for a new one
Pbind(
\dur, Pseq([0.5, 0.25, 0.25, 0.5, 0.5], inf),
\sustain, 0.5,
\degree, Pxrand([-4, -2, 0, 2, 3], inf),
)
);
)
In Code Example 5.44, note that we set clock and quant attributes for the Pdef before
playing it, which are retained and remembered whenever the Pdef’s source changes. All the
Pdefn methods shown in Code Example 5.43 are valid for Pdef as well.
5.5.3 PBINDEF
Pbindef is nearly the same as Pdef; it’s a proxy for Event patterns, and it retains all the same
methods and attributes of Pdef (in fact, Pbindef is a subclass of Pdef). These two classes
even share the same namespace in which their data is stored, in other words, Pdef(\x) and
Pbindef(\x) refer to the same object. The difference between these two classes is that Pbindef
allows key-value pairs in its Event pattern to be modified on an individual basis, instead
of requiring the entire Pbind source code to be present when a change is applied. Instead
of placing a Pbind inside a proxy pattern, the syntax of a Pbindef involves “merging” Pdef
and Pbind into a single entity, much in the same way that their names form a portmanteau.
Code Example 5.45 demonstrates various features of Pbindef, including changing one or more
value-patterns in real-time, adding new value patterns, and substituting a finite value pattern
to create a fade-out. Although this example does not use the quant and clock attributes, they
can be applied to Pbindef using the same techniques that appear in Code Example 5.44.
(
Pbindef(\seqA,
\dur, Pexprand(0.05, 2, inf),
\degree, Prand([0, 1, 2, 4, 5], inf),
\mtranspose, Prand([-7, 0, 7], inf),
\sustain, 4,
\amp, Pexprand(0.02, 0.1, inf),
).play;
)
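While \seqA plays, key-value pairs can be replaced or added on the fly, and a finite amplitude pattern creates a fade-out (these particular substitutions are illustrative):

Pbindef(\seqA, \dur, Pexprand(0.05, 0.5, inf)); // replace one pair

Pbindef(\seqA, \degree, Prand([0, 2, 4], inf), \legato, 2); // replace one pair and add another

Pbindef(\seqA, \amp, Pgeom(0.05, 0.85, 30)); // finite pattern; the stream ends after 30 Events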
In Code Example 5.45, a finite Pgeom modifies the Event stream so that it ends after 30
Events. However, it’s possible to restart the stream by playing the Pbindef again. In this
case, it remembers its key-value pairs, and will generate another thirty Events with the same
decreasing amp pattern. Of course, we can replace the amplitude pattern with an infinite
value pattern before restarting playback, and the Event stream will continue indefinitely once
more. The semi-permanence of Pbindef’s key-value pairs can be a source of confusion, partic-
ularly if switching between different pitch or amplitude tiers. Consider the Pbindef in Code
Example 5.46. It begins with explicit freq values, which override internal calculations that
propagate upward from degree values. As a result, if we try to dynamically switch from fre-
quency to degrees, the degree values will have no effect, because the Pbindef “remembers” the
original frequency values. Thus, if we want to switch to specifying pitch in degrees, we also
need to set the old frequency pattern to nil.
(
Pbindef(\seqB,
\dur, 0.2,
\sustain, 0.02,
\freq, Pexprand(500, 1200, inf)
).play;
)
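A sketch of this switch:

Pbindef(\seqB, \degree, Pwhite(0, 7, inf)); // no audible change; freq still overrides

Pbindef(\seqB, \freq, nil, \degree, Pwhite(0, 7, inf)); // degree now takes effect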
Note
1 Two pattern-related tutorials are recommended as supplementary reading: (1) Understanding
Streams, Patterns and Events, parts 1–7 (https://doc.sccode.org/Tutorials/Streams-Patterns-Events1.
html) and (2) Patterns: A Practical Guide (https://doc.sccode.org/Tutorials/A-Practical-Guide/PG_
01_Introduction.html).
CHAPTER 6
SIGNAL PROCESSING
6.1 Overview
Signal processing effects can add richness, depth, and cohesion to synthesis- and sampling-
based sounds. In previous chapters, we’ve already seen simple examples of signal processing,
for example, using an oscillator to modulate the frequency of another oscillator, applying a
filter to alter a sound’s spectrum, or using a value-constraining operation to bend and distort
a waveform. However, these types of processing techniques are so closely tied to the signals on
which they operate that we usually consider them an inseparable part of the signal-generating
process. In this chapter, we will instead focus on signal processing techniques that are
commonly applied to a mix of several signals, such as delays and reverberation.
The concept of inserts versus auxiliary sends in DAW software is an apt analogy for this
distinction. When a unique effect is needed for a specific audio track, such as an equalizer
plug-in for correcting a spectral imbalance, it’s easiest to insert the effect directly onto that
track. However, when applying one effect to multiple tracks, it’s tedious, inefficient, and re-
dundant to insert identical copies of the effect on each source. At worst, these redundant
effects can overwhelm your CPU. Additionally, when using a time-extending effect (such as
echo or reverb) as an insert, it becomes slightly more difficult to adjust the level of the source
signal without also affecting the level of the processing effect. The general solution to these
problems is to establish an auxiliary send from one or more tracks and receive the signal mix
on a separate auxiliary track, where the desired effect is applied as an insert.
In SC, the equivalent of an insert effect is to bundle signal-processing UGens into an
existing SynthDef, such that the generative UGens and processing UGens are permanently
intertwined as a single algorithm. The equivalent of an auxiliary send is to construct multiple
SynthDefs in which the signal-generating and signal-processing code are fully separated. In
this case, establishing the desired signal path involves placing multiple Synths on the server
and ensuring that signal flows from one to the next.
6.2.1 BUSSES
In digital audio, a bus is a location where samples can be written and/or read, allowing signals
to be mixed and shared between separate processes. Busses are not unique to SC—they are a
staple of DAWs and also appear in other digital audio platforms. Busses also exist in analog
contexts, such as analog mixers, where they serve the same general purpose.
We have already dealt with busses in a limited capacity. Each generative SynthDef we’ve
created includes an output UGen, which specifies a bus destination, and the signal to be sent
to that bus. In the usual case of stereo audio, output busses with indices 0 and 1 correspond to
the main left/right outputs of your sound card or audio interface. This knowledge is sufficient
for simple sounds but is only part of the bigger picture.
When the audio server boots, audio busses and control busses automatically become avail-
able for general signal routing needs, such as passing a source signal to an effect. Control
signals can only be written to control busses. Audio signals can be written to either type of
bus, but they will be downsampled to the control rate if necessary. Audio and control busses
are stored in two separate global arrays and are addressable by numerical index. The default
number of available busses can be accessed using the following code:
s.boot;

s.options.numAudioBusChannels; // default = 1024
s.options.numControlBusChannels; // default = 16384
A contiguous block of audio busses, starting at index zero, are reserved and designated as
“hardware” output busses, and are associated with output channels on your computer’s sound
card or external audio interface. A second, adjacent block of contiguous audio busses are re-
served and designated as hardware input busses, associated with input channels on your sound
card/audio interface. At the time of writing this book, the default is two hardware outputs
and two hardware inputs. Thus, audio busses 0 and 1 represent output channels to speakers,
and audio busses 2 and 3 represent hardware inputs, such as microphone connections. The
remaining audio busses are deemed “private” audio busses, unassociated with physical audio
hardware and freely available to the user for internal signal routing. These default values can
also be retrieved:
s.options.numOutputBusChannels; // default = 2
s.options.numInputBusChannels; // default = 2
This configuration determines the number of I/O level indicators on the server meter display,
pictured in Figure 6.1. Under default conditions, although the window displays the input
channels as having indices 0 and 1, the server internally views these channels as audio busses
whose indices are immediately above the output bus indices.
FIGURE 6.1 The server meter display of hardware input/output channels, and the internal audio
busses they represent.
It’s always a good idea to configure the server’s hardware busses to mirror the input/
output setup of your sound card/audio interface. For example, if you’re using a multichannel
interface, connected to four microphones and eight speakers, you should configure the server
using the code in Code Example 6.1. These changes, like all server options, require a reboot
to take effect. Once rebooted, the meter window will automatically reflect this change when
closed and reopened.
(
s.options.numOutputBusChannels_(8);
s.options.numInputBusChannels_(4);
s.reboot;
)
FIGURE 6.2 The appearance of the server meters after evaluating the code in Code Example 6.1 and
re-opening the meter window.
After evaluating the code in Code Example 6.1, audio busses 0 through 7 will correspond
to your eight loudspeakers, busses 8 through 11 will correspond to your incoming microphone
signals, and busses 12 through 1023 will remain available as private busses for internal signal routing.
Control busses, by contrast, are never associated with your computer’s sound card. They are
all considered “private” busses meant for internal routing of control signals, and there are no
contiguous blocks reserved for special purposes. The remaining code in this chapter section
assumes the server input/output configuration is in its default state, which can be done using
the code in Code Example 6.2.
(
s.options.numOutputBusChannels_(2);
s.options.numInputBusChannels_(2);
s.reboot;
)
(
~mixerBus = Bus.audio(s, 8);
~delayBus = Bus.audio(s, 2);
~reverbBus = Bus.audio(s, 2);
)
~delayBus.index; // -> 12
Control busses behave similarly, as shown in Code Example 6.4. The internal bus counter
begins at zero, thus the Bus objects in this example are associated with control busses 0–3,
4–5, and 6.
(
~quadCtlBus = Bus.control(s, 4);
~stereoCtlBus = Bus.control(s, 2);
~monoCtlBus = Bus.control(s, 1);
)
~monoCtlBus.index; // -> 6
As long as the server remains booted, busses only need to be reserved once. The availability of
busses is not influenced by [cmd]+[period], freeing Synths, and so on. If any of the bus crea-
tion expressions in Code Examples 6.3 or 6.4 are evaluated a second time, the Bus class will
reserve the next available bus or block of busses. If a global variable name is reused, the new
reservation will overwrite the previous reference. The old block will still be considered “in
use” by the bus allocator, but it will no longer be addressable by a variable name. So, if enough
“accidental” re-evaluations occur, the bus allocator will exhaust its supply and post an error
message. We can forcibly trigger this error using iteration:
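// A sketch: the iteration count is arbitrary, chosen to exceed
// the default supply of 1024 audio busses:
1000.do({ Bus.audio(s, 2) }); // eventually posts an allocation error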
The internal allocation counters of the Bus class can be reset by evaluating s.newBusAllocators.
Therefore, it’s usually a good idea to precede a chunk of bus-allocating code with this expression.
As an example, no matter how many times the code in Code Example 6.5 is re-evaluated, the
variables will always reference the same busses, and the allocator will never exhaust its bus supply.
(
s.newBusAllocators; // reset bus counters
~mixerBus = Bus.audio(s, 8); // -> busses 4-11
~delayBus = Bus.audio(s, 2); // -> busses 12-13
~reverbBus = Bus.audio(s, 2); // -> busses 14-15
)
Code Example 6.6 writes a four-channel control-rate pink noise signal to control busses 33
through 36, and our built-in oscilloscope provides a real-time display. Note that the controls at
the top of the oscilloscope window can be manually adjusted to view any range of audio or control
busses, as an alternative to expressing these adjustments through code.
(
x = {Out.kr(33, PinkNoise.kr(1 ! 4))}.play;
s.scope(rate: \control, numChannels: 4, index: 33);
)
x.free;
Similarly, Code Example 6.7 writes a two-channel audio-rate sine wave to audio busses 14–15,
also viewable using the oscilloscope. Under default conditions, we can see the signal on the
scope, but we don’t hear it nor see it on the meters because busses 14–15 are not associated
with hardware channels.
(
x = {Out.ar(14, SinOsc.ar([150, 151], mul: 0.2))}.play;
s.scope(rate: \audio, numChannels: 2, index: 14);
)
x.free;
Technically, there is no such thing as a multichannel bus in SC. All busses have a channel size
of one. When the allocator reserves an n-channel bus, it sets aside n monophonic busses. When
using Out to write a multichannel signal to bus n, the first channel of the signal is written to
bus n, the next channel is written to bus n + 1, and so on. There is nothing to prevent the user
from accidentally writing to or reading from a bus that is already “in use.” Proper use of the Bus
class helps avoid these conflicts, but it’s also the user’s responsibility to make sure signal channel
sizes and bus allocation sizes are consistent with each other. If you allocate a single bus, but
write a stereo signal to it, the second channel of that signal will be written to the next-highest
bus, which may produce unexpected results.
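A minimal sketch of this pitfall (the names and values are assumed):

(
s.newBusAllocators;
~mono = Bus.audio(s, 1); // reserves exactly one bus
x = { Out.ar(~mono, SinOsc.ar([300, 400], mul: 0.1)) }.play; // the 400 Hz channel spills onto the next-highest bus
)

x.free;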
The In UGen accepts a bus index and a number of channels. Within a UGen function, In
taps into one or more busses and becomes a source of the signal present on those busses. Code
Example 6.8 creates two Synths: ~fader reads an audio signal from ~bus, attenuates the am-
plitude of that signal, and writes the modified signal to audio busses 0 and 1. ~src generates
a two-channel sine wave and writes it to ~bus. Thus, we establish a basic signal flow in which
samples are passed from one Synth to another.
(
s.newBusAllocators;
~bus = Bus.audio(s, 2);
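// The \fader and \src SynthDefs are omitted in this excerpt; the
// following definitions are a sketch consistent with the description:
SynthDef(\fader, { |in = 0, out = 0, amp = 0.5|
    var sig = In.ar(in, 2); // read the signal from the input bus
    sig = sig * amp; // attenuate
    Out.ar(out, sig);
}).add;

SynthDef(\src, { |out = 0|
    var sig = SinOsc.ar([250, 251], mul: 0.2); // two-channel sine wave
    Out.ar(out, sig);
}).add;
)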
(
~fader = Synth(\fader, [in: ~bus, out: 0]);
~src = Synth(\src, [out: ~bus]);
s.scope(rate: \audio, numChannels: 16, index: 0); // visualize bus activity
)
(
~src.free; // cleanup
~fader.free;
)
A live microphone signal routed to your loudspeakers can reenter the microphone through the
loudspeaker output, cycle back through your computer, and produce a horrible
screeching sound. Using headphones, rather than loudspeakers, is one of the simplest
and most reliable ways to avoid feedback. If headphones are not available, you should
make sure your mic is placed sufficiently far away from your speakers, and also set the
microphone input gain to be relatively low. Always be ready to press [cmd]+[period]
if feedback starts to occur!
If you have an external microphone connected to your computer via an audio interface, or if
you’re using your computer’s built-in sound card and mic, it’s possible to read this live signal
into SC for monitoring and/or processing. In this case, if you’ve configured SC’s hardware I/O
channels to be consistent with the I/O capabilities of your audio hardware, then you should
see input channel activity on the server meter window when speaking into the mic. Under de-
fault conditions, if your mic is attached to the lowest-numbered input channel of your audio
hardware, it will be accessible via audio bus 2. Code Example 6.9 reads the signal on audio
bus 2, turns it into a two-channel signal via duplication, and writes it to audio busses 0 and 1,
establishing a simple microphone pass-through.
(
{ // FEEDBACK WARNING — use headphones
var sig = In.ar(bus: 2, numChannels: 1) ! 2;
Out.ar(0, sig);
}.play;
)
A microphone signal is a signal like any other, and it can be processed in all the same ways.
For example, Code Example 6.10 applies a random ring modulation effect to the mic signal.
(
{ // FEEDBACK WARNING — use headphones
var sig, freq;
sig = In.ar(2, 1) ! 2;
freq = LFNoise0.kr([8, 7]).exprange(50, 2000);
sig = sig * SinOsc.ar(freq);
Out.ar(0, sig);
}.play;
)
By default, audio bus 2 refers to the lowest-numbered hardware input channel, but this may
not always be the case, particularly if your hardware I/O configuration changes. For this
reason, SoundIn is usually preferable to In for accessing a mic signal. SoundIn is a con-
venience UGen that automatically offsets the bus index by the number of hardware output
busses, so that providing an index of 0 always corresponds to the lowest-numbered channel
of your audio hardware device. The following two expressions are equivalent, assuming n is
a valid bus index:
In.ar(s.options.numOutputBusChannels + n);

SoundIn.ar(n);
Code Example 6.11 rewrites the code from Code Example 6.9, substituting SoundIn
for In.
(
{ // FEEDBACK WARNING — use headphones
var sig = SoundIn.ar(bus: 0) ! 2;
Out.ar(0, sig);
}.play;
)
SoundIn does not have an argument that represents a number of channels to read (its
second and third arguments are mul/add). If, for example, you have two microphones in a
stereo arrangement, connected to the lowest input channels on your audio interface, and
want to read them into SC as a stereo signal, the bus for SoundIn should be the array
[0, 1].
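For example, a sketch that reads the first two hardware inputs as one stereo signal:

(
{ // FEEDBACK WARNING — use headphones
    var sig = SoundIn.ar([0, 1]);
    Out.ar(0, sig);
}.play;
)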
FIGURE 6.3 The orientation of the node tree as represented through its graphical utility.
Before exploring a detailed example of passing a signal from one Synth to another, let’s
first examine a simpler example involving two independent generative Synths. In Code
Example 6.12, we hear both generative signals simultaneously, but the underlying process
involves more than what meets the ear. By default, a new Synth is placed at the head of
the node tree (technically, it is added to the head of the default group, discussed in the
next section). Thus, when Synth ~a is created, it becomes the headmost Synth. Synth
~b is created next, at which point it becomes headmost, nudging the other Synth down-
ward (two Synths cannot exist side-by-side). On each control cycle, SC first calculates 64
samples of ~b and writes the result to busses zero and one, then calculates 64 samples of
~a, and also writes to busses 0 and 1. By design, the Out UGen sums its output with bus
content that already exists during the same control cycle. At the end of each control cycle,
hardware bus content is sent out to audio hardware, and the process begins again for the
next control block.
(
SynthDef(\sine, { |gate = 1, out = 0|
    var sig = SinOsc.ar(250, mul: 0.1 ! 2);
    sig = sig * Env.asr.kr(2, gate);
    Out.ar(out, sig);
}).add;
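// The \pink SynthDef is omitted in this excerpt; a matching
// definition (an assumption) pairs pink noise with the same envelope:
SynthDef(\pink, { |gate = 1, out = 0|
    var sig = PinkNoise.ar(0.1 ! 2);
    sig = sig * Env.asr.kr(2, gate);
    Out.ar(out, sig);
}).add;
)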
(
~a = Synth(\sine);
~b = Synth(\pink);
)
Because b + a = a + b, the order in which these Synths are created has no effect on the sound.
Generally, if no Synth relies on the output of another Synth, then order of execution is irrele-
vant, even though Synths are still processed from head to tail.
Now, consider an example of two Synths that share a signal. Code Example 6.13 allocates
a private two-channel audio bus and plays two Synths. One generates a tone blip approx-
imately every three seconds, and the other applies a reverb effect to an input signal using
FreeVerb2. Because the default behavior results in each new Synth being placed at the head
of the node tree, this code produces a Synth order that is consistent with our desired signal
flow (the reverb Synth ~b is created first, then the source Synth ~a is created). On each control
cycle, the following steps occur: the source Synth ~a calculates 64 samples and writes them to
the private bus; the reverb Synth ~b then reads those samples from the private bus, applies
reverberation, and writes the result to hardware busses 0 and 1.
(
s.newBusAllocators;
~bus = Bus.audio(s, 2);
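// The \pulses SynthDef and the opening of the \reverb SynthDef are omitted
// in this excerpt; the following lines are a sketch consistent with the
// description (a tone blip roughly every three seconds, and a FreeVerb2 effect):
SynthDef(\pulses, { |freq = 700, out = 0|
    var trig = Dust.kr(1/3);
    var sig = SinOsc.ar(freq ! 2, mul: 0.2);
    sig = sig * Env.perc(0.002, 0.2).kr(0, trig);
    Out.ar(out, sig);
}).add;

SynthDef(\reverb, { |in = 0, out = 0, size = 0.99|
    var sig = In.ar(in, 2);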
    sig = FreeVerb2.ar(sig[0], sig[1], mix: 0.2, room: size);
    Out.ar(out, sig);
}).add;
)
(
~b = Synth(\reverb, [in: ~bus, out: 0]);
~a = Synth(\pulses, [out: ~bus]);
)
If the Synths in Code Example 6.13 were created in the reverse order, we would hear silence
(press [cmd]+[period] and try this yourself). In this case, the reverb Synth is headmost and runs
first, reading from the private bus before any new samples have been written to it during that
cycle; the source Synth then writes its samples to the private bus, but they are discarded before
the reverb Synth runs again. Thus, on each control cycle, this improper order yields a critical
miscommunication between the two Synths.
Every server has a default Group, which can be accessed with:

s.defaultGroup;

When creating a new node (i.e., a new Synth or Group), its target can be a Synth, a Group, the
server itself, or nil.1 If the target is the server (specified as s), the target internally becomes the
server's default Group. If the target is nil or unspecified, it becomes the default group on the
default server. By default, the localhost server is the default server, but this can be confirmed with:
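Server.default; // -> localhost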
There are five possible addActions, usually specified as symbols: \addToHead, \addToTail,
\addBefore, \addAfter, and \addReplace. \addToHead and \addToTail assume the
target is a Group and place the new Node at the head or tail of that Group. \addBefore and
\addAfter place the new Node immediately before or after the target, which may be a Synth
or Group. \addReplace removes the target, which can be a Synth or Group, and replaces it
with the new Node. The default target and addAction are Server.default.defaultGroup
and \addToHead. Thus, if these parameters are unspecified, a newly created Synth or Group
will appear at the head of the default Group. Figure 6.4 shows the node trees that result from
adding a new Synth using various combinations of targets and addActions.
FIGURE 6.4 Node tree arrangements after adding a new Synth using different combinations of
targets and addActions. Each outcome begins from the same tree configuration: a user-created
Group (1000) inside the default Group, and a Synth (1001) inside that Group.
A primary application of Groups is to simplify the process of ensuring correct Synth order,
demonstrated in Code Example 6.14. For example, if we have a source sound and an effect, we
might create a “generative” Group and a “processing” Group. If these two Groups are in the
correct order, we only need to make sure each Synth is placed into the correct Group, and we
can create the two Synths in either order.
(
~genGrp = Group();
~fxGrp = Group(~genGrp, \addAfter);
~a = Synth(\pulses, [out: ~bus], ~genGrp);
~b = Synth(\reverb, [in: ~bus, out: 0], ~fxGrp);
)
FIGURE 6.5 The Node tree after evaluating Code Example 6.14.
With this Group-based approach, we can freely add more generative Synths to ~genGrp.
If their output signals are routed to ~bus, they will be processed by the reverb effect. The rel-
ative order of the generative Synths is unimportant; they only need to exist closer to the head
of the node tree than the reverb Synth. Code Example 6.15 uses a routine that creates four ad-
ditional source Synths. If a Group receives a set message, it relays that message to every node
it contains, providing a convenient way to communicate with a large collection of Synths. If
the Group contains another Group, the set message is applied recursively. If the free method
is applied to a Group, it and all of the nodes it contains are destroyed. The freeAll method,
by contrast, relays a free message to contained nodes, without freeing the Group itself. By
freeing all the Synths in the generative Group, the source sounds are removed, but the reverb
effect remains, allowing us to hear the complete decay of the reverb tail.
(
Routine({
    4.do({
        Synth(\pulses, [freq: exprand(300, 3000), out: ~bus], ~genGrp);
        exprand(0.05, 1).wait;
    });
}).play;
)
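For example (a sketch, using the freq argument these Synths already have):

~genGrp.set(\freq, 500); // every contained source Synth receives the message

~genGrp.freeAll; // sources removed; the reverb tail rings out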
Companion Code 6.1 puts these concepts into practice by emulating the basic signal flow of
a simple analog mixer.
Delay lines are the basis of many signal processing effects, such as echoes, choruses, flangers,
and pitch-shifters. At their simplest, a delay-based effect mixes a signal with a
delayed copy of itself. Long delay times produce discrete echoes, while very short delay
times create interference patterns at the cycle or sample level, which influences the
sound's spectrum. Digital delay lines work by writing input samples to memory and
reading them back later. Thus, delay lines require the allocation of a buffer whose size
determines the maximum possible delay time. A delay process treats its memory buffer
“circularly,” as if the beginning and end were connected. In other words, when either
the reading or writing process of a delay reaches the end of its buffer, it wraps back to
the beginning.
(
{
    var sig, delay;
    Line.kr(0, 0, 1, doneAction: 2);
    sig = PinkNoise.ar(0.5 ! 2) * XLine.kr(1, 0.001, 0.3);
    delay = DelayN.ar(sig, 0.5, 0.5) * -6.dbamp;
    sig = sig + delay;
}.play(fadeTime: 0);
)
A delay-based effect extends the durational lifespan of a sound process. In the previous ex-
ample, it would be incorrect to give XLine a terminating doneAction. Doing so would free the
Synth immediately after the source sound is complete and prevent the delay from sounding.
For this reason, a separate terminating UGen (Line) with an appropriately long duration is
needed. For the sake of brevity, most examples in this section bundle delay effects and the
source signals they process into the same UGen function. In practice, a more sensible and
efficient approach is to build separate SynthDefs for sound sources and processing effects,
and route the signal from one Synth to the other using a bus, as demonstrated in Code
Example 6.17.
(
s.newBusAllocators;
~bus = Bus.audio(s, 2);
SynthDef(\del, {
    arg in = 0, out = 0, del = 0.5, amp = 0.5;
    var sig, delay;
    sig = In.ar(in, 2);
    delay = DelayN.ar(sig, 1, del) * amp;
    sig = sig + delay;
    Out.ar(out, sig);
}).add;

SynthDef(\src, {
    arg out = 0;
    var sig = PinkNoise.ar(0.5 ! 2);
    sig = sig * XLine.kr(1, 0.001, 0.3, doneAction: 2);
    Out.ar(out, sig);
}).add;
)
~del = Synth(\del, [in: ~bus, out: 0]); // create the echo effect
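Synth(\src, [out: ~bus]); // play the source through the effect (re-evaluate to repeat)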
~del.free;
DelayN automatically allocates a buffer using real-time memory. This allocation process is dif-
ferent from allocating a Buffer object on the server. In the case of DelayN, the memory buffer
is inherently private to the delay UGen, and we cannot directly access it. There are limits to
the total amount of delay time that real-time memory can accommodate. For example, if we
attempt to instantiate a minute-long delay, the server will generate one or more “alloc failed”
messages. On some older versions of SC, the server may crash and a reboot may be required:
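// A sketch: one minute of delay requires more real-time memory than the default allows
{ DelayN.ar(Silent.ar, 60, 60) }.play;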
There are two limiting factors. The first is the amount of RAM your computer provides.
It is impossible to allocate delays whose total duration corresponds to more real-time memory
than your computer can provide. However, it is quite rare to need a total amount of delay time
that exceeds your computer's real-time capabilities. Assuming four bytes per sample, eight gigabytes
of RAM corresponds to about 11.5 hours of monophonic audio at a sampling rate of 48,000 Hz.
The second (more easily adjustable) factor is the amount of real-time memory that SC
is allowed to access. This value, measured in kilobytes, is determined by a ServerOptions
attribute named memSize. By default, this value is 8192, but it can be modified like any other
server attribute and applied with a server reboot, demonstrated in Code Example 6.18. A value
of 2²⁰ (roughly one gigabyte) works well in most cases, providing ample space
for real-time buffer allocation while remaining well within the amount of real-time memory
that most computers offer.
(
s.options.memSize_(2.pow(20));
s.reboot;
)
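The next example uses BufDelayN, a variant of DelayN that reads and writes a user-allocated Buffer instead of real-time memory. It assumes a buffer stored in ~buf; because the signal being delayed is two-channel, one mono buffer per channel is needed (a sketch, with an assumed one-second length per channel):

(
~buf = { Buffer.alloc(s, s.sampleRate * 1) } ! 2; // one second per channel, more than the 0.5 s used below
)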
(
{
    var sig, delay;
    Line.kr(0, 0, 1, doneAction: 2);
    sig = PinkNoise.ar(0.5 ! 2) * XLine.kr(1, 0.001, 0.3);
    delay = BufDelayN.ar(~buf, sig, 0.5) * -6.dbamp;
    sig = sig + delay;
}.play(fadeTime: 0);
)
(
{
    var sig, delay, lfo;
    lfo = SinOsc.kr(0.1, 3pi/2).range(0.001, 0.01);
    sig = LPF.ar(Saw.ar(100 ! 2, mul: 0.2), 2000);
    delay = DelayL.ar(sig, 0.01, lfo);
    sig = sig + delay;
}.play;
)
Companion Code 6.2 explores the application of dynamic delay lines to create chorus and
harmonizer effects.
(
{
    var sig, delay;
    Line.kr(0, 0, 10, doneAction: 2);
    sig = PinkNoise.ar(0.5 ! 2) * XLine.kr(1, 0.001, 0.3);
    delay = CombN.ar(sig, 0.1, 0.1, 9) * -6.dbamp;
    sig = sig + delay;
}.play(fadeTime: 0);
)
Why is it called a “comb” filter? With relatively large delay times, a comb filter is little more
than a repetitive echo generator. But, when the delay times are very small, the feedback com-
ponent of a comb filter exaggerates the phase interference patterns, creating resonance at
frequencies whose periods are equal to the delay time (or an integer division of the delay
time), because cycles of these frequencies will perfectly align with their delayed copies, and
their amplitudes will sum to produce even greater amplitudes. In other words, the frequency
equal to the inverse of the delay time, and all harmonics of that frequency, will resonate. If
these repetitions occur frequently enough (about 20 repetitions per second or more), we will
experience a sensation of pitch. The spectrum of this resonant behavior, demonstrated in Code
Example 6.22, resembles the teeth of a comb, with prominent peaks at resonant frequencies
and valleys of phase cancellation in-between.
(
SynthDef(\comb, { |freq = 4|
    var sig, delay;
    Line.kr(0, 0, 10, doneAction: 2);
    sig = PinkNoise.ar(0.5 ! 2) * XLine.kr(1, 0.001, 0.3);
    delay = CombN.ar(sig, 1, 1/freq, 9) * -6.dbamp;
    delay = LeakDC.ar(delay);
    sig = sig + delay;
    Out.ar(0, sig);
}).add;
)
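For example (a sketch), a higher freq value produces a clear sensation of pitch:

Synth(\comb, [freq: 80]); // resonance at 80 Hz and its harmonics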
We can only create grains from sound that is happening now, or sound that has already
happened. Nonetheless, real-time granular synthesis provides options for decorating and
enhancing live sound in exotic ways.
GrainIn is the simplest option for live granular synthesis. It is essentially a pared-down
version of GrainBuf, designed to operate on a live signal instead of one that is read from a
buffer. As previously discussed, SoundIn and GrainIn should be deployed in two separate
SynthDefs that pass signal via a bus, for optimal modularity/flexibility. In Code Example 6.23,
they are merged into one UGen function for the sake of brevity. GrainIn is somewhat boring in
isolation. Because there are no controls for playback rate or grain start position, the grains are
perfectly synchronized with the input signal, and the effect is uninteresting. In fact, GrainIn
is virtually indistinguishable from a retriggerable envelope generator. A nearly identical result,
pictured in Code Example 6.24, can be achieved using EnvGen/Env.
(
{ // FEEDBACK WARNING — use headphones
    var sig = SoundIn.ar(0);
    sig = GrainIn.ar(
        numChannels: 1,
        trigger: Dust.kr(16),
        dur: 0.04,
        in: sig
    ) ! 2;
}.play;
)
(
{ // FEEDBACK WARNING — use headphones
    var sig = SoundIn.ar(0);
    sig = sig * Env.sine(0.04).ar(gate: Dust.ar(16)) ! 2;
}.play;
)
GrainIn naturally pairs with delay lines, demonstrated in Code Example 6.25, which can be
inserted before granulating to desynchronize the grains from the input signal, producing a
granulated echo effect. Many variations are possible, such as adding parallel delays with dif-
ferent delay times, or using a delay line with feedback, such as CombN. The delayed signals
can also be further processed before or after granulation (filters, amplitude modulation, re-
verb, etc.) to create a more complex effect.
(
{ // FEEDBACK WARNING — use headphones
    var sig = SoundIn.ar(0);
    sig = DelayN.ar(sig, 0.2, 0.2) * 0.7;
    sig = GrainIn.ar(1, Dust.kr(16), 0.04, sig) ! 2;
}.play;
)
In contrast, GrainBuf provides a more sophisticated interface, though the setup is more
involved. Broadly, the process involves recording a live signal into a buffer (looping and
overwriting old samples in cyclic fashion), while simultaneously generating grains from the
same buffer. This approach necessitates a periodic ramp signal, used as a recording pointer for
BufWr. GrainBuf relies on this same pointer signal for determining grain start positions but
subtracts some amount so that the grain pointer consistently lags behind the record pointer by
a fraction of the buffer’s length.
Code Example 6.26 demonstrates live granulation using GrainBuf. A buffer holds the
three most recent seconds of a live microphone signal. Phasor generates the pointer signal,
which BufWr uses to index into the buffer. GrainBuf subtracts a durational value (0.2 seconds)
from this pointer signal. Converting the pointer values is the only tricky part: the Phasor signal
represents a frame count, our delay argument is measured in seconds, and GrainBuf expects a
normalized value between zero and one for the grain start position. To provide GrainBuf with
an appropriately lagged pointer signal, we must convert seconds to samples, subtract this value
from the pointer signal, and divide by the number of frames in the buffer.
(
SynthDef(\livegran, {
    arg buf = 0, rate = 1, ptrdelay = 0.2;
    var sig, ptr, gran;
    sig = SoundIn.ar(0);
    ptr = Phasor.ar(0, BufRateScale.ir(buf), 0, BufFrames.ir(buf));
    BufWr.ar(sig, buf, ptr);
    sig = GrainBuf.ar(
        numChannels: 2,
        trigger: Dust.kr(16),
        dur: 0.04,
        sndbuf: buf,
        rate: rate,
        pos: (ptr - (ptrdelay * SampleRate.ir)) / BufFrames.ir(buf)
    );
    Out.ar(0, sig);
}).add;
)
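Before playing, the buffer must be allocated; a sketch consistent with the three-second buffer described above:

b = Buffer.alloc(s, s.sampleRate * 3); // three seconds of mono audio

Synth(\livegran, [buf: b]); // FEEDBACK WARNING — use headphones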
A potential pitfall revolves around the fact that as the playback/record pointers move through
the buffer, there is always a discontinuity at the record pointer. This pointer indicates the
“now” moment, where the current sample from our live signal overwrites the oldest sample in
the buffer (see Figure 6.6). Care should be taken to avoid generating grains that contain this
discontinuity, as they may produce an audible click and degrade sound quality.
FIGURE 6.6 A close-up of the buffer discontinuity produced by recording new samples over old
samples.
Pitch-shifting, if not done carefully, can produce grains that overlap with this discontinuity.
Consider a grain playback rate of two, which specifies that a grain should be transposed up
one octave, pictured in Figure 6.7. To achieve the correct grain duration and playback rate,
GrainBuf extracts a section of the buffer that is twice the specified grain duration, and then
time-compresses it by a factor of two. If the time delay between the record/playback pointers is
too small, the grain will contain the discontinuity. Downward transpositions, by comparison,
are not problematic, because they extract a section of the buffer that is smaller than the desired
grain size, and time-expand it. In this case, there is no risk of including the discontinuity as
long as the specified grain duration is smaller than the distance between the record/playback
pointers. Code Example 6.27 provides an example in which an upward transposition and
short pointer delay captures the discontinuity in each grain, producing clicky, glitchy audio.
FIGURE 6.7 A grain that contains the buffer discontinuity because of upward pitch-shifting, despite
the appearance of sufficient distance between the grain start position and the discontinuity.
(
// FEEDBACK WARNING — use headphones
b.zero;
Synth(\livegran, [buf: b, ptrdelay: 0.005, rate: 4.midiratio]);
)
A general solution, pictured in Code Example 6.28, is to constrain one parameter based on the
value of another, like proportionally decreasing grain duration as grain playback rate increases.
Specifically, we can set the maximum allowable grain duration equal to pointer delay divided
by playback rate. For the actual grain duration, we consider the maximum grain duration and
the user-provided duration, and use whichever value is smaller.
(
SynthDef(\livegran, {
    arg buf = 0, rate = 1, ptrdelay = 0.2;
    var sig, ptr, gran, maxgraindur;
    sig = SoundIn.ar(0);
    ptr = Phasor.ar(0, BufRateScale.ir(buf), 0, BufFrames.ir(buf));
    BufWr.ar(sig, buf, ptr);
    maxgraindur = ptrdelay / rate;
    sig = GrainBuf.ar(
        numChannels: 2,
        trigger: Dust.kr(16),
        dur: min(0.04, maxgraindur),
        sndbuf: buf,
        rate: rate,
        pos: (ptr - (ptrdelay * SampleRate.ir)) / BufFrames.ir(buf)
    );
    Out.ar(0, sig);
}).add;
)
(
// FEEDBACK WARNING — use headphones
b.zero;
Synth(\livegran, [buf: b, ptrdelay: 0.05, rate: 7.midiratio]);
)
A possible side-effect of this approach is that large upward pitch-shifts or small pointer delays
can produce extremely short grains, which have a “clicky” character (despite avoiding the
discontinuity) and may not be desirable. In most cases, specifying a sufficiently long delay
between the recording pointer and grain pointer—at least 0.03 seconds or so—helps provide
some wiggle room for pitch shifting and avoids excessively short grains.
Though this GrainBuf example sounds like the GrainIn examples, it enables musical
flexibility that GrainIn cannot easily provide. Beyond pitch-shifting, this approach invites
random/nonlinear grain pointer movement for real-time “scrambling” effects and offers the
ability to pause the recording pointer to create a granular “freeze” effect. These and other ideas
are explored in Companion Code 6.4.
Notes
1 A target for a new Node may also be an integer, in which case it is interpreted as the Node ID of the
desired target. Whenever a Synth or Group is created, it automatically receives a Node ID, which is
an incremented integer that starts at 1,000 and counts upward as new Nodes are created. We rarely
provide Node IDs ourselves, and instead let the ID allocator handle this process automatically. In
many cases, we won't know the Node ID of a Synth or Group offhand, so we don't typically rely
on Node IDs as targets. Node IDs are visible on the Node tree window and also in Figure 6.4.
2 DelayN (and other delay UGens that dynamically allocate real-time memory) automatically increase
the user-provided “maxdelaytime” argument to a value that corresponds to a number of samples
equal to the next-highest power of two. Thus, these UGens often allow delay times that are some-
what larger than the user-provided maximum. For practical reasons, though, it is sensible to treat the
maximum delay time as a true maximum, even if longer delay times are technically possible.
3 Manfred R. Schroeder, "Natural Sounding Artificial Reverberation," Journal of the Audio Engineering
Society, vol. 10, no. 3 (July 1962): 219–223.
CHAPTER 7
EXTERNAL CONTROL
7.1 Overview
A self-contained SC project can be compelling and expressive on its own, but typing and
evaluating code as the sole means of interaction can feel like a limiting experience. SC can
communicate with a variety of external devices, including MIDI keyboards, game controllers,
other computers running SC, and more. External controllers enhance creative options by in-
viting new modes of interaction and alternative approaches to composition, performance, and
improvisation.
7.2 MIDI
The MIDI (Musical Instrument Digital Interface) communication protocol was publicly
released in 1983 and rapidly became a centerpiece of the digital audio world. Today, it remains
a ubiquitous option for sending and receiving data, built into virtually every DAW and audio
programming environment. Though MIDI was created with music in mind, the protocol is
largely music-agnostic; MIDI messages are merely control data that exist as sequences of bytes,
with no active knowledge of the sound they may be controlling. It’s the responsibility of a
receiving device to translate MIDI data into relevant musical actions. This flexibility allows
MIDI to be used in many different contexts, some of which are completely unrelated to sound.
The full collection of MIDI messages is relatively large, but in practical use, only a
handful of “channel voice” messages are encountered. These messages include note-on, note-
off, control change (CC), program change, pitch bend, and aftertouch. Even within this small
collection, note-on/off and CC are arguably the most commonly used. A data component of a
message contains 7 bits, which represent 2⁷ = 128 discrete values. Thus, there are 128 possible
MIDI note numbers, 128 possible note velocities, 128 unique controller numbers, and so on.
Despite this relatively low resolution, the dataspace provided by MIDI is sufficient to accom-
modate many different types of projects.
Working with MIDI in SC typically begins by evaluating:

MIDIClient.init;
which makes SC aware of available MIDI devices and displays a list of MIDI sources and
destinations in the post window. From here, the quickest way to connect to all available MIDI
sources is to evaluate:
MIDIIn.connectAll;
To confirm that a controller is communicating with SC, we can print incoming MIDI messages to the post window:

MIDIFunc.trace(true);

and disable this behavior when it's no longer needed:

MIDIFunc.trace(false);
TABLE 7.1 MIDIdef creation methods, their expected function arguments, and interpreted meaning
of value/number arguments.
s.boot;
MIDIIn.connectAll;
(
MIDIdef.noteOn(\simpleNotes, {
    |val, num, chan, src| // chan & src declared but not used
    {
        var sig, freq, amp;
        freq = num.midicps;
        amp = val.linexp(1, 127, 0.01, 0.25);
        sig = SinOsc.ar(freq * [0, 0.1].midiratio) * amp;
        sig = sig * Env.perc.kr(2);
    }.play;
});
)
MIDIdef is a subclass of MIDIFunc, from which it inherits its functionality. MIDIFunc can
be used to achieve the same results as MIDIdef with a slightly different syntax. The benefit
of MIDIdef is that each instance can be dynamically replaced with a new MIDIdef, without
inadvertently creating a duplicate. For instance, if you replace the SinOsc in Code Example 7.1
with a different oscillator and re-evaluate the code, a new MIDIdef replaces the older one, and
is stored at the same key.
Once created, a MIDIdef can be referenced by its key, using the same proxy syntax that
appears in Code Example 5.38:
MIDIdef(\simpleNotes);
MIDIdef(\simpleNotes).disable; // bypassed
MIDIdef(\simpleNotes).enable; // reactivated
By default, [cmd]+[period] destroys MIDIdefs, which is sometimes but not always desirable.
A MIDIdef can be made to survive [cmd]+[period] by setting its permanent attribute to true
(similar to making a TempoClock permanent):
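MIDIdef(\simpleNotes).permanent_(true);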
A MIDIdef can be permanently destroyed with free. If multiple MIDIdef objects exist, they
can all be destroyed at once using the class method freeAll:
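MIDIdef(\simpleNotes).free;

MIDIdef.freeAll;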
When working with MIDI note messages, we typically want a MIDIdef to respond to all
note numbers, so that the entire keyboard of a piano-type controller is playable. With control
change messages, however, we usually want more selective behavior, that is, a MIDIdef that
only responds to one specific controller, rather than all possible CC messages. To filter out
undesirable messages, we can provide an integer as a third MIDIdef argument. This value is
compared against incoming messages. If the number of an incoming message does not
match, the message is ignored. This argument can also be an array of integers, in which case
the MIDIdef ignores any message whose number does not match any number contained in
the array. It’s possible to create the same filtering behavior by implementing conditional logic
inside the MIDIdef function, but providing a message-filtering argument is usually simpler.
Code Example 7.2 creates a second MIDIdef that updates a cutoff frequency whenever a CC
message from controller number one is received. The cutoff is incorporated into the note-on
MIDIdef, which passes a sawtooth wave through a resonant low-pass filter. The modulation
wheel, a standard feature on many keyboard controllers, is conventionally designated con-
troller number one. If your keyboard has a mod wheel, it will likely influence the filter value
in Code Example 7.2. This example is relatively simple in that the cutoff value is only applied
at the moment a new Synth is created; moving the mod wheel does not influence Synths that
have already been created and are currently sounding (Companion Code 7.1 demonstrates
how to dynamically influence existing Synths with real-time control data).
(
~filtCutoff = 200;

MIDIdef.cc(\filtControl, {
    |val, num, chan, src|
    ~filtCutoff = val.linexp(1, 127, 200, 10000);
}, ccNum: 1 // only respond to CC#1 messages
);

MIDIdef.noteOn(\simpleNotes, {
    |val, num, chan, src|
    {
        arg cf = 200;
        var sig, freq, amp;
        freq = num.midicps;
        amp = val.linexp(1, 127, 0.01, 0.25);
        sig = Saw.ar(freq * [0, 0.1].midiratio) * amp;
        sig = RLPF.ar(sig, cf, 0.1);
        sig = sig * Env.perc.kr(2);
    }.play(args: [\cf, ~filtCutoff]);
});
)
(
b = Buffer.read(s, Platform.resourceDir ++ "/sounds/a11wlk01.wav");
MIDIdef.freeAll;
MIDIdef.noteOn(\sampler, {
    |val, num, chan, src|
    {
        var sig, rate, amp;
        rate = (num - 60).midiratio;
        amp = val.linexp(1, 127, 0.1, 0.7);
        sig = PlayBuf.ar(
            1, b, BufRateScale.ir(b) * rate, startPos: 85000
        );
        sig = sig * Env.perc.kr(2) * amp ! 2;
    }.play;
});
)
Although MIDI messages are often used to make a MIDI controller behave like a musical
instrument, this is not a requirement. The action function of a MIDIdef can contain any
valid SC code. The MIDIdef in Code Example 7.4 will post a random number in response to
note 60, play a short noise burst in response to note 61, and quit the audio server in response
to note 62! Messages from other note numbers are ignored.
(
MIDIdef.freeAll;
MIDIdef.noteOn(\weird, {
    |val, num, chan, src|
    case
    {num == 60} {exprand(1, 100).postln}
    {num == 61} {
        {PinkNoise.ar(0.1 ! 2) * Line.kr(1, 1, 0.1, doneAction: 2)}.play
    }
    {num == 62} {s.quit};
}, noteNum: [60, 61, 62]
);
)
MIDI controllers come in many varieties; the examples presented here may require some
tweaking to work with your specific MIDI device. Incorporating additional features, like lis-
tening for note-off messages and applying pitch bend information, requires a bit more work
and is explored in Companion Code 7.1. In contrast, Companion Code 7.2 demonstrates a
fundamentally different application: using a MIDIdef to facilitate text entry of pitch informa-
tion into arrays and patterns.
To send MIDI from SC to an external device, we again begin by initializing the MIDI client, and then examine the available destinations:

MIDIClient.init;

MIDIClient.destinations;
Each destination will have a device name and a port name, both represented as strings. Code
Example 7.5 shows an example of what a destinations array might look like. Some devices may
have multiple input ports, in which case each port will appear as a unique destination.
-> [
    MIDIEndPoint("IAC Driver", "IAC Bus 1"),
    MIDIEndPoint("IAC Driver", "IAC Bus 2"),
    MIDIEndPoint("UltraLite mk3 Hybrid", "MIDI Port 1"),
    MIDIEndPoint("UltraLite mk3 Hybrid", "MIDI Port 2"),
    MIDIEndPoint("OSCulator In (8000)", "OSCulator In (8000)"),
    MIDIEndPoint("Oxygen 49", "Oxygen 49")
]
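A MIDIOut is then typically created with newByName, providing the device and port names of the desired destination (the names below correspond to the hypothetical array above):

m = MIDIOut.newByName("UltraLite mk3 Hybrid", "MIDI Port 1");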
Alternatively, a MIDIOut object can be created with new, along with the array index of the
desired destination. However, the array order may change if your setup changes, so this
is not as reliable as newByName:1
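m = MIDIOut.new(0); // connect to the destination at index 0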
After creating a new instance of MIDIOut, we can apply instance methods to generate and
send messages to the target device. For instance, the following line will generate a note-on
message on channel 0, corresponding to note number 72 with a velocity of 50 (keep in mind
that most receiving devices envision MIDI channels as being numbered 1–16, and will
interpret channel n from SC as equivalent to channel n + 1). If the receiving destination is a
sound-producing piece of hardware or software, and is actively "listening" for MIDI data, the
following line should play a sound:
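m.noteOn(0, 72, 50);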
At this point, from the perspective of the MIDI destination, the situation is no different from
a user holding down key 72. This imaginary key can be “released” by sending an appropriate
note-off message (note that the release velocity may be ignored by some receiving devices):
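m.noteOff(0, 72, 50);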
(
// assumes 'm' is an appropriate instance of MIDIOut
r = Routine({
    inf.do({
        var note = rrand(40, 90);
        m.noteOn(0, note, exprand(20, 60).asInteger);
        (1/20).wait;
        m.noteOff(0, note);
        (3/20).wait;
    });
}).play;
)
r.stop;
This routine-based approach works well enough, but if the routine is stopped between
a note-on message and its corresponding note-off, the result is a “stuck” note. Obviously,
[cmd]+[period] will have no effect because the SC audio server is not involved. One solution is
to use iteration to send all 128 possible note-off messages to the receiving device:
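(0..127).do({ |n| m.noteOff(0, n) });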
This solution can be enhanced by encapsulating this expression in a function and registering
it with the CmdPeriod class, so that the note-off action is performed whenever [cmd]+[period]
is pressed. This action can be un-registered at any time by calling remove on CmdPeriod (see
Code Example 7.7).
(
~allNotesOff = {
    "all notes off".postln;
    (0..127).do({ |n| m.noteOff(0, n) });
};
CmdPeriod.add(~allNotesOff);
)
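CmdPeriod.remove(~allNotesOff); // un-register the action when no longer needed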
Events provide a more elegant interface for sending MIDI messages to external devices. In
Chapter 5, we introduced note- and rest-type Events. There is also a built-in midi type event,
whose keys include \midiout and \midicmd, which specify the instance of MIDIOut to
be used, and the type of message to be sent. We can view valid options for \midicmd by
evaluating:
Event.partialEvents.midiEvent[\midiEventFunctions].keys;
The type of MIDI message being sent determines additional keys to be specified. For example,
if we specify \noteOn as the value for the \midicmd key, the Event will also expect \chan,
\midinote, \amp, \sustain, and \hasGate. A value between 0 and 1 should be provided for
\amp, which is automatically multiplied by 127 before transmission. When \hasGate is true
(the default), a corresponding note-off message is automatically sent after \sustain beats have
elapsed. If \hasGate is false, no note-off message will be sent. Code Examples 7.8 and 7.9
demonstrate the use of Events to send MIDI data to an external destination.
(
(
type: \midi,
midiout: m,
midicmd: \noteOn,
chan: 0,
midinote: 60,
amp: 0.5,
sustain: 2 // note-off sent 2 beats later
).play;
)
(
t = TempoClock.new(108/60);
p = Pbind(
    \type, \midi,
    \dur, 1/4,
    \midiout, m,
    \midicmd, \noteOn,
    \chan, 0,
    \midinote, Pseq([60, 72, 75], inf),
    \amp, 0.5,
    \sustain, 1/8,
);
~seq = p.play(t);
)
~seq.stop;
7.3 OSC
Open Sound Control (OSC) is a specification for communication between computers,
synthesizers, and other multimedia devices, developed by Matt Wright and Adrian Freed at
UC Berkeley CNMAT in the late 1990s, and first published in 2002.2 Designed to meet the
same general goals of MIDI, it allows devices to exchange information in real-time, but offers
advantages in its customizability and flexibility. In contrast to MIDI, OSC supports a greater
variety of data types, and includes high-resolution timestamps for temporal precision. OSC
also offers an open-ended namespace using URL-style address tags instead of predetermined
message types (such as note-on, control change, etc.), and is optimized for transmission over
modern networking protocols. It’s a common choice for projects involving device-to-device
communication, such as laptop ensembles, multimedia collaborations, or using a smartphone
as a multitouch controller.
Little needs to be known about the technical details of OSC to use it effectively in SC.
To send a message from one device to another, the simplest option is for both devices to be
on the same local area network, which helps avoid security and firewall-related obstacles. The
sending device needs to know the IP address of the receiving device, as well as the network
port on which that device is listening. The structure of an OSC message begins with a URL-
style address, followed by one or more pieces of data. Code Example 7.10 shows an example
of an OSC message, as it would be displayed in SC, which includes an address followed by
three numerical values.
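A hypothetical message of this form (the address and values are invented for illustration):

[ /mixer/fader1, 0.4, 0.2, 0.9 ]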
Since its creation, OSC has been incorporated into many creative audio/video soft-
ware platforms. SC, in particular, is deeply intertwined with OSC. In addition to being
able to exchange OSC with external devices, OSC is also how the language and server
communicate with each other. Any code that produces a server-side reaction (adding a
SynthDef, allocating a Buffer, creating a Synth, etc.) is internally translated into OSC
messages.
The port on which the SC language listens for incoming OSC messages can be confirmed by evaluating:

NetAddr.langPort;

The following NetAddr represents the instance of the SC language running on your computer:
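NetAddr("127.0.0.1", NetAddr.langPort);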
OSCdef is a primary class for receiving OSC messages, generally similar to MIDIdef in
behavior and design. At minimum, an OSCdef expects (1) a symbol, serving as a unique
identifier for the OSCdef, (2) a function, evaluated in response to a received OSC message,
and (3) the OSC address against which incoming messages are compared (non-matching
addresses are ignored). Inside the function, four arguments can be declared: the OSC message,
a timestamp, a NetAddr representing the sending device, and the port on which the message
was received. Often, we only need the first of these four arguments. OSC addresses begin with
a forward slash, and symbols in SC often begin with a backslash. SC can’t parse this combina-
tion of characters, which is why symbols are typically expressed using single-quote enclosures
in the context of OSC.
\/test; // invalid
'/test'; // valid
Code Example 7.11 shows the essentials of sending an OSC message to/from the SC
language. After creating instances of NetAddr and OSCdef, we can send an OSC message to
ourselves with sendMsg, providing the OSC address tag, followed by any number of
comma-separated pieces of data. The message is represented as an array when received by
the OSCdef, so we can use an expression like msg[2] to access a specific piece of data within
the message.
(
~myself = NetAddr("127.0.0.1", NetAddr.langPort);
OSCdef(\receiver, {
    |msg, time, addr, port|
    ("random value is " ++ msg[2]).postln;
}, '/test'
);
)
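The sending expression might look like this (a sketch; the data values are arbitrary):

~myself.sendMsg('/test', 1, exprand(1, 100)); // msg[2] in the OSCdef is the random value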
x.set(\gate, 0);
By default, the audio server listens for OSC messages on port 57110, and automatically has
an instance of NetAddr associated with it, accessible via s.addr. The server is programmed to
create a new Synth in response to messages with the ‘/s_new’ address. Following the address, it
expects the SynthDef name, a unique node ID (assigned automatically when using the Synth
class), an integer representing an addAction (0 means \addToHead), the node ID of the target
Group or Synth (the default Group has a node ID of 1), and any number of comma-separated
pairs, representing argument names and values. Messages with the ‘/n_set’ address tag are
equivalent to sending a set message to a Node (either a Synth or Group). It requires the node
ID, followed by one or more argument-value pairs. Code Example 7.13 performs the same
actions as Code Example 7.12, but builds the OSC messages from scratch. Note that we don’t
need to use s.addr to access the NetAddr associated with the server; sendMsg can be directly
applied to the instance of the localhost server.
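As a sketch (assuming a SynthDef named \sine with a gate argument, and an unused node ID of 2000), such messages might look like this:

s.sendMsg('/s_new', \sine, 2000, 0, 1); // name, node ID, addAction, target ID

s.sendMsg('/n_set', 2000, \gate, 0); // node ID, then argument-value pairs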
Documentation of the OSC messages the server can receive is detailed in a help file titled "Server
Command Reference.” Keep in mind it’s almost always preferable to use Synth and other
language-side classes that represent server-side objects, rather than manually writing and
sending OSC messages. The primary goal of this section is simply to demonstrate the useful-
ness and uniformity of the OSC protocol.
(
{
    var sig, freq, trig;
    trig = Impulse.kr(20);
    freq = LFDNoise3.kr(0.2).exprange(200, 1600);
    SendReply.kr(trig, '/freq', freq);
    sig = PinkNoise.ar(1 ! 2);
    sig = BPF.ar(sig, freq, 0.02, 4);
}.play;
)
(
OSCdef(\getfreq, {
    arg msg;
    ~freq = msg[3].postln;
}, '/freq');
)
(
{ // evaluate repeatedly
    var sig = SinOsc.ar(~freq * [0, 0.2].midiratio);
    sig = sig * Env.perc.kr(2) * 0.2;
}.play;
)
(
s.freeAll; // cleanup
OSCdef.freeAll;
)
Mobile apps such as TouchOSC allow a smartphone or tablet to serve as a customizable multitouch
controller. Though the design specifics for these platforms vary, general OSC principles remain
the same. In the TouchOSC Mk1 editor software, for example, the OSC address and
numerical data associated with a graphical widget appears in the left-hand column when it is
selected (see Figure 7.1). In this example, we have a knob that sends values between 0 and 1,
which arrive with the ‘/1/rotary1’ OSC address.
FIGURE 7.1 A screenshot of the TouchOSC Mk1 Editor software, displaying a graphical knob that
transmits values that have the address “/1/rotary1.”
To send OSC to SC from TouchOSC on a mobile device, it should be on the same local
network as your computer and needs the IP address and port of your computer, which it
considers to be its “host” device. This information can be configured in the OSC settings page
of the mobile TouchOSC app, pictured in Figure 7.2 (note that the self-referential IP address
“127.0.0.1” cannot be used here, since we are dealing with two separate devices). The final
step is to create an OSCdef to receive the data, shown in Code Example 7.15, after which data
should appear in the post window when interacting with the TouchOSC interface. To send
OSC data from SC to TouchOSC, we create an instance of NetAddr that represents the mo-
bile device running TouchOSC, and send it a message, for example, one that randomly moves
the knob. If OSC data doesn’t appear, OSCFunc.trace(true) can be used as a debugging
tool that prints all incoming OSC messages. Accidentally mistyping the OSC address or IP
address is a common source of problems that causes OSC transmission to fail.
FIGURE 7.2 OSC configuration settings in the TouchOSC Mk1 mobile app.
(
OSCdef(\fromTouchOSC, { |msg|
    "data received: ".post;
    msg[1].postln;
}, '/1/rotary1');
)
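Sending data back to the mobile device might look like this (a sketch; the IP address and port are hypothetical and should match your device's OSC settings):

(
~phone = NetAddr("192.168.1.15", 9000); // hypothetical address of the mobile device
~phone.sendMsg('/1/rotary1', rrand(0.0, 1.0)); // randomly move the knob
)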
A project can be scaled up to accommodate a larger collection of addresses and data, or include
conditional logic in an OSCdef in order to create different logical branches. Data "sanitization" may
be appropriate in some cases; in Code Example 7.16, the receiving device uses conditional logic and
clip to avoid bad values.
(
// On the receiving computer:
OSCdef(\receiver, { |msg|
    var freq;
    if(msg[1].isNumber, {
        freq = msg[1].clip(20, 20000);
        {
            var sig = SinOsc.ar(freq) * 0.2 ! 2;
            sig = sig * XLine.kr(1, 0.0001, 0.25, doneAction: 2);
        }.play(fadeTime: 0);
    });
}, '/makeTone');
)
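On the sending computer, a NetAddr pointing at the receiver is needed first (the IP address below is hypothetical; the port is the receiver's langPort):

~sc = NetAddr("192.168.1.20", 57120);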
~sc.sendMsg('/makeTone', 500);
Though it’s possible to send OSC data from the language to an audio server running on a
separate computer, this may not necessarily be the simplest option, for reasons explained in
Section 7.3.2. From a practical perspective, you may find that sending data from language to
language is a more straightforward option (albeit less direct), in which the receiving computer
creates Synths and other server-related actions within the function of an OSCdef. For the in-
terested reader, information on sending OSC directly to a remote audio server can be found
in a pair of guide files in the help documentation, titled “Server Guide” and “Multi-client
Setups.”
FIGURE 7.3 A simple Arduino program that writes analog input values to its serial output port.
A random segment of the data stream resulting from the Arduino code in Figure 7.3 will
look something like this:
...729a791a791a792a793a792a...
Once your controller is connected to your computer’s USB port, you can evaluate
SerialPort.devices to print an array of strings that represent available serial port devices.
Using this information, the next step is to create an instance of SerialPort that connects to the
appropriate device, by providing the name and baudrate. Identifying the correct device name
may involve some trial-and-error if multiple devices are available:
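~port = SerialPort("/dev/tty.usbmodem14101", baudrate: 9600); // hypothetical name and baudrate; both must match your hardware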
The final step, shown in Code Example 7.17, is to build a looping routine that retrieves data
from the serial port by repeatedly calling read, and storing the returned value in a global
variable. The details may involve some experimentation, depending on how the transmitted
data is formatted. For example, when an Arduino sends a number to the serial port, it writes
the data as ASCII values that represent each character in sequence. So, if the Arduino sends
a value of 729, it arrives as the integers 55, 50, and 57, which are the ASCII identifiers for the
characters “7,” “2,” and “9.” In each routine loop, we read the next value from the serial port
and convert it to the symbol it represents, and then we perform a conditional branch based
on whether it is a number or the letter “a.” If it’s a number, we add the number to an array. If
it’s the letter “a,” we convert the character array to an integer and empty the array. While the
routine plays, ~val is repeatedly updated with the latest value from the USB port, which can
be used elsewhere in your code. Interestingly, we have a looping routine with no wait time,
yet SC does not crash when it runs. This is because read pauses the routine thread while
waiting for the Arduino to send its next value. Because the Arduino waits for one millisecond
between sending successive values (see Figure 7.3), this wait time is effectively transferred into
the routine.
(
// assumes ~port is a valid instance of SerialPort
var ascii = 0, chars = [];
r = Routine({
    loop{
        ascii = ~port.read.asAscii;
        if(ascii.isDecDigit) {chars = chars.add(ascii)};
        if(ascii == $a) {
            ~val = chars.collect({ |n| n.digit }).convertDigits;
            chars = [];
        };
    };
}).play;
)
(
HID.findAvailable;
HID.postAvailable;
)
If, for example, you have an external mouse connected to your computer, information about
that device should appear, and may look something like this:
A connection to an HID can be established using open, and specifying the device’s Vendor
ID and Product ID:
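~device = HID.open(1133, 49970); // hypothetical Vendor ID and Product ID, of the kind printed by HID.postAvailable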
Keep in mind your device may cease its normal functions while this connection remains
open! A connection can be closed with the close method, or by using the class method
closeAll:
~device.close;
HID.closeAll;
An HID encompasses some number of HID elements, which represent individual aspects of
the device, such as the state of a button, or the horizontal position of a joystick. Once a con-
nection has been opened, the following statement will print a line-by-line list of all the device’s
elements:
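~device.elements.do({ |elem| elem.postln }); // a sketch: elements is a dictionary of the device's HIDElement objects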
For instance, a gaming mouse with several different buttons may print something like this:
...etc...
Figuring out which elements correspond to which features may involve some trial-and-
error. The process begins by creating an HIDdef, much like creating a MIDIdef/OSCdef.
An HIDdef’s action function expects a relatively large argument list, but the first two
values (the normalized value and raw value of the element) are usually the most relevant.
An integer, which follows the function as the third argument for HIDdef, determines
the element to which the HIDdef is listening. This message-filtering behavior is essen-
tially the same as getting the first MIDIdef in Code Example 7.2 to respond only to the
mod wheel.
(
HIDdef.element(\getElem0, { |val, raw|
[val, raw].postln;
}, elID: 0 // only respond to element 0
);
)
Once an HID element has been identified, the HIDdef function can be dynamically changed
and re-evaluated to accommodate the needs of a particular project. Like MIDIdef and
OSCdef, an HIDdef can be destroyed with free, or all HIDdef objects can be collectively
destroyed with HIDdef.freeAll.
Notes
1 For Linux users, additional steps are required to connect an instance of MIDIOut to a target desti-
nation, because of how MIDI is handled on this operating system. Detailed information is available
in the MIDIOut help file, under a section titled “Linux specific: Connecting and disconnecting
ports.” This section of the help file recommends reliance on MIDIOut.newByName for maximizing
cross-platform compatibility.
2 Matthew J. Wright and Adrian Freed, “Open SoundControl: A New Protocol for Communicating
with Sound Synthesizers,” in Proceedings of the 1997 International Computer Music Conference,
Proceedings of the International Computer Music Association (San Francisco: International
Computer Music Association, 1997).
CHAPTER 8
GRAPHICAL USER INTERFACES
8.1 Overview
A Graphical User Interface (GUI) refers to an arrangement of interactive objects, like knobs
and sliders, that provides a system for controlling a computer program and/or displaying in-
formation about its state. Because SC is a dynamically interpreted language that involves real-
time code evaluation, a GUI may not always be necessary, and may even be a hindrance in
some cases. In other situations, building a GUI can be well worth the effort. If you are reverse-
engineering your favorite hardware/software synthesizer, emulating an analog device, sending
your work to a collaborator who has minimal programming experience, or if you simply want
to conceal your scary-looking code, a well-designed GUI can help.
Older versions of SC featured a messy GUI “redirect” system that relied on platform-
specific GUI classes that were rife with cross-platform pitfalls. Since then, the SC development
community has unified the GUI system, which now uses Qt software on all supported oper-
ating systems, resulting in more simplicity and uniformity.
Window.new().front;
On creation, a new window accepts several arguments, shown in Code Example 8.1. The first
argument is a string that appears in the window’s title bar. The second, bounds, is an instance
of Rect (short for rectangle) that determines the size and position of the window relative to
your computer’s screen. A new Rect involves four integers: the first two determine the hori-
zontal/vertical pixel distance from the bottom-left corner of your screen, and the second two
integers determine pixel width/height. Two additional Booleans determine whether a window
can be resized and/or moved. Setting these to false prevents the user from manipulating the
window with the mouse, which is useful when the window is meant to remain in place.
Experimenting with these arguments (especially the bounds) is a great way to understand how
they work. A window can be destroyed with close, and multiple windows can be closed with
Window.closeAll.
(
w = Window(
name: "Hello World!",
bounds: Rect(500, 400, 300, 400),
resizable: false,
border: false
).front;
)
w.close;
The class method screenBounds returns a Rect corresponding to the size of your computer
screen. By accessing certain attributes of this Rect, a window can be made to appear in a con-
sistent location, irrespective of screen size. The instance method alwaysOnTop can be set to a
Boolean that determines whether the window will remain above other windows, regardless of
the window that currently has focus. Making this attribute true keeps the window visible while
working in the IDE, which can be desirable during GUI development to avoid having to click
back and forth between windows. Code Example 8.2 demonstrates the use of these two methods.
(
w = Window(
"A Centered Window",
Rect(
Window.screenBounds.width / 2 - 150,
Window.screenBounds.height / 2 - 200,
300,
400
)
)
.alwaysOnTop_(true)
.front;
)
8.2.2 VIEWS
View is the parent class of most recognizable/useful GUI classes, such as sliders, buttons,
and knobs, and it’s also the term used to describe GUI objects in general. Table 8.1 provides
a descriptive list of commonly used subclasses. The View class defines core methods and
behaviors, which are inherited by its subclasses, establishing a level of behavioral consistency
across the GUI library. At minimum, a new view requires a parent view, that is, the view on
which it will reside, and a Rect that determines its bounds, relative to its parent. Unlike win-
dows, which are positioned from the bottom-left corner of your screen, a view’s coordinates
are measured from the top-left corner of its parent view. If two views are placed on a window
such that their bounds intersect, the view created second will be rendered on top of the first,
partially obscuring it (this may or may not be desirable, depending on context). Once a view
is created, it can be permanently destroyed with remove. Code Example 8.3 demonstrates
these techniques by placing a slider on a parent window. Note that the rectangular space on
the body of a window is itself a type of view (an instance of TopView, accessible by calling the
view method on the window). In the Qt GUI system, the distinction between a window and
its TopView is minimal, and both are valid parents.
(
w = Window("A Simple Slider", Rect(500, 400, 300, 400))
.alwaysOnTop_(true).front;
x = Slider(w, Rect(40, 40, 40, 320));
)
TABLE 8.1 A list of commonly used View classes and their descriptions.
(
Window("Layout Management", Rect(100, 100, 250, 500)).front
.layout_(
VLayout(
HLayout(Knob(), Knob(), Knob(), Knob()),
HLayout(Slider(), Slider(), Slider(), Slider()),
Slider2D(),
Button()
)
);
)
Every view has a visible attribute, which determines whether it will be displayed, and an enabled
attribute, which determines whether the user can interact with the view. Most views also
have a background attribute, which determines a background color. As a reminder: to “get”
an attribute, we call the method, and the attribute’s value is returned. To “set” an attribute
to a new value, we can assign the value using an equals symbol or the underscore syntax,
both demonstrated in Code Example 8.5. Recall that the underscore setter is advantageous
because it returns the receiver and lets us chain setter commands back-to-back in a single ex-
pression. Note that the underscore syntax and setter-chaining have already appeared in Code
Example 8.2.
(
~slider = Slider();
w = Window("A Slider", Rect(500, 400, 100, 400)).front
.alwaysOnTop_(true)
.layout_(HLayout(~slider));
)
~slider.background; // "get" the attribute
~slider.background = Color.red; // "set" with the equals syntax (assumed demonstration)
~slider.background_(Color.gray(0.5)); // "set" with the underscore syntax (assumed demonstration)
As a side note, color is expressed using the Color class, which encapsulates four floats be-
tween 0 and 1. The first three are red, green, and blue amounts, and the fourth is a transpar-
ency value called alpha. The alpha value defaults to 1, which represents full opacity, while 0
represents full transparency.
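For example:
Color(1, 0, 0); // opaque red
Color(0, 0, 1, 0.5); // semi-transparent blue
Color.gray(0.8); // a light gray, via a convenience method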
s.boot;
(
~amp = 0.3;
~synth = { |amp, on = 0|
var sig = LFTri.ar([200, 201], mul: 0.1);
sig = sig * amp.lag(0.1) * on;
}.play(args: [amp: ~amp]);
~slider = Slider()
.value_(~amp)
.action_({ |v|
~amp = v.value;
~synth.set(\amp, ~amp);
});
~button = Button()
.states_([
["OFF", Color.gray(0.2), Color.gray(0.8)],
["ON", Color.gray(0.8), Color.green(0.7)]
])
.action_({ |btn| ~synth.set(\on, btn.value) });
w = Window("tone", Rect(500, 400, 300, 150)).front
.alwaysOnTop_(true)
.layout_(VLayout(~slider, ~button))
.onClose_({ ~synth.free }); // assumed completion: stop the sound when the window closes
)
The code in Code Example 8.6 has a few noteworthy features. Our button has two states,
defined by setting the states attribute equal to an array containing one internal array for
each state. Each internal array contains three items: a string to be displayed, the string color,
and the background color. The lag method is applied to the amplitude argument, which
wraps the argument in a Lag UGen, whose general purpose is to smooth out discontinuous
changes to a signal over a time interval (in this case, a tenth of a second). Without lagging,
any large, instantaneous, or fast changes to the slider may result in audible pops or “zipper”
noise (if you remove the lag, re-evaluate, and rapidly move the slider with the mouse, you’ll
hear a distinct “roughness” in the sound). Finally, the onClose attribute stores a function to
be evaluated when the window closes, useful for ensuring sound does not continue after the
window disappears.
Although all views understand value, not all views respond to this method in a mean-
ingful or useful way. Some classes rely on one or more alternative method calls. For instance,
text-oriented objects such as StaticText and TextView consider their text to be their “value,”
which they return in response to string. Likewise, Slider2D returns its values as two inde-
pendent coordinates through the methods x and y.
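For example, a minimal sketch:
~text = StaticText().string_("hello");
~text.string; // -> "hello"
~xy = Slider2D().action_({ |v| [v.x, v.y].postln });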
Companion Code 8.1 combines several of the techniques presented thus far and expands
upon Table 8.1 by providing an interactive tour of several different types of views.
8.2.6 RANGE-MAPPING
A numerical range between 0 and 1 is acceptable for signal amplitude but unsuitable for fre-
quency, MIDI data, decibels, and many other parameters. Even when the default range is suit-
able, the inherent linear behavior of sliders and knobs may not be. Suppose we want our slider
to control frequency instead of amplitude. To produce a sensible frequency range, one option
is to apply a range-mapping method such as linexp before the value is passed to a signal al-
gorithm. Range-mapping can alternatively be handled with ControlSpec, a class designed to
map values back and forth between 0 and 1, and another custom range, using the methods
map and unmap. A ControlSpec requires a minimum value, a maximum value, and a warp
value, which collectively determine its range and curvature. The warp value may be a symbol
(e.g., \lin, \exp), or an integer (similar to Env curve values). Code Example 8.7 demonstrates
the use of ControlSpec.
(
~freqspec = ControlSpec(100, 2000, \exp);
~freq = ~freqspec.map(0.2);
~synth = { |freq, on = 0|
var sig = LFTri.ar(freq.lag(0.1) + [0, 1], mul: 0.05);
sig = sig * on;
}.play(args: [freq: ~freq]);
~slider = Slider()
.value_(0.2)
.action_({ |v|
~freq = ~freqspec.map(v.value);
~synth.set(\freq, ~freq);
});
~button = Button()
.states_([
["OFF", Color.gray(0.2), Color.gray(0.8)],
["ON", Color.gray(0.8), Color.green(0.7)]
])
.action_({ |btn| ~synth.set(\on, btn.value) });
w = Window("tone", Rect(500, 400, 300, 150)).front
.alwaysOnTop_(true)
.layout_(VLayout(~slider, ~button))
.onClose_({ ~synth.free }); // assumed completion, mirroring Code Example 8.6
)
ControlSpec.specs.keys; // returns the names of pre-defined specs, e.g., \freq, \amp
As an exercise to help you understand these fundamental techniques more deeply, con-
sider combining the frequency and amplitude GUIs in Code Examples 8.6 and 8.7 into a
single GUI with a pair of sliders and an on/off button.
The sixth argument of a keystroke action function is described in the SC help documents as being the “most reliable way to check which key was pressed.” Mouse functions accept arguments that represent some combination of the view itself, the cursor’s x/y position, modifier keys, the button number, and the click count.
One of the best ways to understand these functions and their arguments is to create an empty
view that posts argument values, as demonstrated in Code Example 8.8.
(
w = Window("Keyboard and Mouse Data").front
.layout_(VLayout(
StaticText()
.align_(\center)
.string_("press keys/click the mouse")
));
// assumed completion: register functions that post keystroke and mouse data
w.view.keyDownAction_({ |v, char, mod, uni, code, key| [char, mod, uni, code, key].postln });
w.view.mouseDownAction_({ |v, x, y, mod, butt, count| [x, y, mod, butt, count].postln });
)
As a practical example of incorporating keystrokes into a GUI, Companion Code 8.2 builds a
virtual piano keyboard that can be played using the computer keyboard.
(
MIDIIn.connectAll;
w = Window("MIDI Control").front
.layout_(VLayout(
StaticText().align_(\center)
.string_("press a key on your MIDI controller"),
~numbox = NumberBox().align_(\center)
.enabled_(false)
.font_(Font("Arial", 40))
));
// assumed completion: display incoming note numbers; GUI changes made from a
// MIDI function must be deferred to the AppClock
MIDIdef.noteOn(\showNum, { |vel, num| { ~numbox.value_(num) }.defer });
)
Companion Code 8.3 reinforces these techniques by creating an interface that can dynami-
cally “learn” MIDI inputs and assign their data to control specific GUI objects.
Once the counter has reached a threshold, the button is re-enabled. As an alternative to wrapping GUI
code in a deferred function, we can explicitly play the routine on the AppClock. Also, note
that the val counter serves two purposes: it is part of the conditional logic that determines
when to exit the while loop and simultaneously influences the button’s background color
during the cooldown period.
(
~button = Button()
.states_([["Click Me", Color.white, Color(0.5, 0.5, 1)]])
.action_({ |btn|
btn.enabled_(false);
Routine({
var val = 1;
while({val > 0.5}, {
btn.states_([
[
"Cooling down...",
Color.white,
Color(val, val, 1)
]
]);
val = val - 0.01;
0.05.wait;
}
);
btn.enabled_(true);
btn.states_([
[
"Click Me",
Color.white,
Color(val, val, 1)
]
]);
}).play(AppClock);
});
w = Window("Cooldown Button", Rect(100, 100, 300, 75)).front
.layout_(VLayout(~button));
)
Companion Code 8.4 reinforces these timing techniques by creating a simple stopwatch.
(
u = UserView().background_(Color.gray(0.2))
.drawFunc_({
Pen.width_(2) // set Pen characteristics
.strokeColor_(Color(0.5, 0.9, 1))
.addArc(100 @ 100, 50, 0, 2pi) // construct a circle
.stroke; // render the circle (draw border, do not fill)
Pen.width_(6)
.fillColor_(Color(0.2, 0.8, 0.2))
.moveTo(90 @ 250) // construct a triangle, line-by-line
.lineTo(210 @ 320).lineTo(90 @ 320).lineTo(90 @ 250)
.fillStroke; // render the triangle (fill and draw border)
Pen.width_(2)
.strokeColor_(Color(1, 0.5, 0.2));
8.do({
Pen.line(280 @ 230, 300 @ 375);
Pen.stroke;
Pen.translate(10, -5);
// 'translate' modifies what Pen perceives as its origin by a
// horizontal/vertical shift. You can imagine translation as
// shifting the paper underneath a pen.
});
});
w = Window("Pen", Rect(100, 100, 450, 450))
.front.layout_(HLayout(u));
)
Companion Code 8.5 explores these techniques by creating a custom transport control
interface (i.e., a bank of buttons with icons representing play, pause, stop, etc.).
8.4.2 ANIMATION
A UserView’s drawFunc executes when the window is first made visible and can be re-executed
by calling refresh on the UserView. If a drawFunc’s elements are static (as they are in Code
Example 8.11), refreshing has no discernible effect. But, if the drawFunc includes dynamic
elements (such as randomness or some “live” element), refreshing will alter its appearance.
A cleverly designed drawFunc can produce animated effects.
There is no need to construct and play a routine that repeatedly refreshes a UserView.
Instead, this process can be handled internally by making a UserView’s animate attribute
true. When animation is enabled, a UserView automatically refreshes itself at a frequency
determined by its frameRate, which defaults to 60 frames per second. This value cannot be
made arbitrarily high and will be constrained by your computer screen’s refresh rate. In many
cases, a frame rate between 20 and 30 provides an acceptable sense of motion.
By default, when a UserView is refreshed, it will clear itself before redrawing. This be-
havior can be changed by setting clearOnRefresh to false. When false, the UserView will
draw each frame on top of the previous frame, producing an accumulation effect. By drawing
a semi-transparent rectangle over the entire UserView at the start of the drawFunc, we can
produce a “visual delay” effect (see Code Example 8.12) in which moving elements appear to
leave a trail behind them.
(
var win, uv, inc = 0;
win = Window("Tunnel Vision", Rect(100, 100, 400, 400)).front;
uv = UserView(win, win.view.bounds)
.background_(Color.black)
.drawFunc_({ |v|
// draw transparency layer
Pen.fillColor_(Color.gray(0, 0.05))
.addRect(v.bounds)
.fill;
// assumed completion: draw an expanding circle, then advance the animation counter
Pen.width_(3)
.strokeColor_(Color(0.5, 0.9, 1))
.addArc(200 @ 200, (inc % 60) / 60 * 250, 0, 2pi)
.stroke;
inc = inc + 1;
})
.clearOnRefresh_(false) // accumulate frames to create the trailing effect
.animate_(true);
)
Bear in mind that SC is not optimized for visuals in the same way it is optimized for audio.
A looping process that calls screen updates may strain your computer’s CPU, significantly so
if it performs intensive calculations and/or has a relatively high frame rate. It’s usually best to
have a somewhat conservative attitude toward animated visuals in SC, rather than an enthusi-
astically decorative one! To conclude this chapter, Companion Code 8.6 creates an animated
spectrum visualizer that responds to sound.
PART III
LARGE-SCALE PROJECTS
On an individual basis, the topics presented throughout Parts I and II should be relatively ac-
cessible and learnable for a newer SC user. With a little dedication and practice, you will soon
start to feel increasingly comfortable creating and adding SynthDefs, working with buffers,
playing Synths and Pbinds, and so on. Putting all of these elements together into a coherent
and functional large-scale structure, however, often poses unique and unexpected challenges
to the user, sometimes manifesting confounding problems with elusive solutions. The beauty
(and perhaps, the curse) of SC is that it is a sandbox-style environment, with few limitations
placed on the user, and no obvious guideposts that direct the user toward a particular work-
flow. These final chapters seek to address these challenges by introducing tips and strategies
for building an organized performance structure from individual sounds and ideas, and
examining specific forms these projects might take.
CHAPTER 9
CONSIDERATIONS FOR LARGE-SCALE PROJECTS
9.1 Overview
SC naturally invites a line-by-line or chunk-by-chunk approach to working with sound. If
you’re experimenting, sketching out new ideas, or just learning the basics, this type of interac-
tion is advantageous, as it allows us to express ideas and hear the results with ease. Nearly all
the previous examples in this book are split into separate chunks; for example, one line of code
boots the server, a second block reads audio files into buffers, another block adds SynthDefs,
and so on. However, as your ideas grow and mature, you may find yourself seeking to com-
bine them into a more robust and unified structure that can be seamlessly rehearsed, modi-
fied, performed, and debugged. A scattered collection of code snippets can certainly get the
job done, and in fact, there is a pair of keyboard shortcuts ([cmd]+[left square bracket] and
[cmd]+[right square bracket]), which navigate up and down through parenthetically-enclosed
code blocks. However, the chunk-by-chunk approach can also be unwieldy, time-consuming,
and prone to errors. It’s arguably preferable to write a program that can be activated with a
single keystroke, and which includes intuitive mechanisms for adjustment, navigation, and
resetting.
It should be noted that not all project types will rely on SC as a real-time performance
vehicle. Some projects may only need its signal-generating and signal-processing capabilities.
A common example is the use of SC as a “rendering farm” for sonic material (making use of
real-time recording features discussed in Section 2.9.6), while using multitrack software for
assembly, mixing, and fine-tuning. This is a perfectly sensible way to use SC, particularly for
fixed-media compositions, but it forgoes SC’s rich library of algorithmic tools for interactivity,
indeterminacy, and other dynamic mechanisms.
All things considered, when tackling a big project in SC, it’s a good idea to have a plan.
This is not always possible (and experimentation is part of the fun), but even a partially
formed plan can supply crucial guidance on the path forward. For example, how will the
musical material progress through time? Will the music advance autonomously and deter-
ministically, along a predetermined timeline? Or will a hardware interface (the spacebar, a
MIDI controller) be used to advance from one moment to the next? Or is the order of musical
actions indeterminate, with some device for dynamic interaction? Will a GUI be helpful to
display information during performance? Plunging ahead into the code-void without answers
to these types of questions is possible but risks the need for major changes and significant
backtracking later.
In recognition of the many divergent paths a large-scale SC project might take, this
chapter focuses on issues that are widely applicable to large-scale performance structures,
regardless of finer details. Specifically, this chapter focuses on the importance of order of ex-
ecution, which in this context means the sequence in which setup- and performance-related
actions must be taken. This concept is relevant in all programming languages; when a pro-
gram is compiled, an interpreter parses the code in the order it is written. Variables must be
declared before they can be used, memory must be allocated before data can be stored there,
and functions/routines must be defined before they can be executed. In SC, which exists as
a client-server duo communicating via OSC, the order in which setup actions take place is
perhaps even more important, and examples of pitfalls are plentiful: a Buffer must be allocated
before samples can be recorded into it, a SynthDef must be fully added before a corresponding
Synth can be spawned, and of course, the audio server must be booted before any of this can
happen.
9.2 waitForBoot
If one of our chief goals is to be able to run an arbitrarily complex sound-generating program
with a single keystroke, then a good first step is to circumvent the inherently two-step process
of (1) booting the server and (2) creating sound after booting is complete. Sound-generating
code can only be called after the booting process has completely finished, and a direct attempt
to bundle these two actions into a single chunk of code will fail (see Code Example 9.1), a bit
like pressing the accelerator pedal in a car at the instant the ignition begins to turn, but before
the engine is actually running.
(
s.boot;
{PinkNoise.ar(0.2 ! 2) * XLine.kr(1, 0.001, 2, doneAction: 2)}.play;
)
The essence of the problem is that the server cannot receive commands until booting is
complete, which requires a variable amount of time, usually at least a second or two. The
language, generally ignorant of the server’s status, evaluates these two expressions with
virtually no time between them. The play method, seeing that the server is not booted,
posts a warning, but there is no inherent mechanism for delaying the second expression
until the time is right. Instead, the pink noise function is simply not received by the
server.
The waitForBoot method, demonstrated in Code Example 9.2, provides a solution. The
method is applied to the server and given a function containing arbitrary code. This method
will boot the server and evaluate its function when booting is complete.
(
s.waitForBoot({
{PinkNoise.ar(0.2 ! 2) * XLine.kr(1, 0.001, 2, doneAction: 2)}.play;
});
)
However, waitForBoot does not solve every timing problem. Code Example 9.3 adds a SynthDef and immediately attempts to create a corresponding Synth, and it fails on a first evaluation.
(
s.waitForBoot({
SynthDef(\tone_000, {
var sig = SinOsc.ar([350, 353], mul: 0.2);
sig = sig * XLine.kr(1, 0.0001, 2, doneAction: 2);
Out.ar(0, sig);
}).add;
Synth(\tone_000);
});
)
When the language tries to create the Synth, the server has not yet finished building
the SynthDef, and produces a “SynthDef not found” error. On second evaluation, the
code in Code Example 9.3 will work properly, because enough time will have passed to
allow the SynthDef-building process to finish. This issue can be a common source of
confusion, resulting in code that always fails on the first try, but works fine on subse-
quent tries.
This example highlights the difference between synchronous and asynchronous
commands, discussed in a guide file titled “Synchronous and Asynchronous Execution.”
In the world of digital audio, certain actions must occur with a high level of timing preci-
sion, such as processing and playing audio signals. Without sample-accurate timing, audio
samples get dropped during calculation, producing crackles, pops, and other unacceptable
glitches. These time-sensitive actions are referred to as “synchronous” in SC and receive
the highest scheduling priority. Asynchronous actions, on the other hand, are those that
require an indeterminate amount of time to complete, and which generally do not re-
quire precise timing or high scheduling priority, such as adding a SynthDef or allocating
a Buffer.
The problem of waiting for the appropriate duration while asynchronous tasks are un-
derway is solved with the sync method, demonstrated in Code Example 9.4. When the
language encounters s.sync, it sends a message to the server, asking it to report back when
all of its ongoing asynchronous commands are complete. When the server replies with this
confirmation, the language then proceeds to evaluate code that occurs after the sync mes-
sage. Note that although the SynthDef code remains the same in Code Example 9.4, its
name has been changed so that the server interprets it as a brand new SynthDef that has
not yet been added.
(
s.waitForBoot({
SynthDef(\tone_001, {
var sig = SinOsc.ar([350, 353], mul: 0.2);
sig = sig * XLine.kr(1, 0.0001, 2, doneAction: 2);
Out.ar(0, sig);
}).add;
s.sync;
Synth(\tone_001);
});
)
Because a sync message involves suspending and resuming code evaluation in the language, it
can only be called from within a routine, or inside a method call (such as waitForBoot) that
implicitly creates a routine. Thus, the code in Code Example 9.5 will fail, but will succeed if
enclosed in a routine and played.
(
// audio server assumed to be already booted
SynthDef(\tone_002, {
var sig = SinOsc.ar([350, 353], mul: 0.2);
sig = sig * XLine.kr(1, 0.0001, 2, doneAction: 2);
Out.ar(0, sig);
}).add;
s.sync; // fails here: sync cannot be called outside of a routine
Synth(\tone_002);
)
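For comparison, a minimal sketch of the routine-wrapped version, renamed \tone_003 so the server treats it as a new SynthDef:
(
Routine({
SynthDef(\tone_003, {
var sig = SinOsc.ar([350, 353], mul: 0.2);
sig = sig * XLine.kr(1, 0.0001, 2, doneAction: 2);
Out.ar(0, sig);
}).add;
s.sync;
Synth(\tone_003);
}).play;
)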
Liberal usage of sync messages is generally harmless but does not provide any benefits. For
example, if adding several SynthDefs to the server, there is no need to sync between each pair;
a SynthDef is an independent entity, whose existence does not rely on the existence of other
SynthDefs. Similarly, there is no need to sync between a block of buffer allocations and a
block of SynthDefs, since these two processes do not (or at least should not) depend on each
other. Generally, a sync message is only necessary before running code that depends on the
completion of a previous asynchronous server command.
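For example, recording into a Buffer depends on the Buffer’s allocation, which is asynchronous, so a sync belongs between the two steps (a sketch; \recorder stands in for a hypothetical SynthDef that records into its buf argument):
(
s.waitForBoot({
~buf = Buffer.alloc(s, s.sampleRate * 4); // asynchronous
s.sync; // wait for the allocation to complete
Synth(\recorder, [buf: ~buf]); // hypothetical recording SynthDef
});
)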
Though not a huge chore in a small example, re-evaluating setup code after every [cmd]+[period] becomes tedious in a more complex
project.
(
s.newBusAllocators;
~bus = Bus.audio(s, 2);
s.waitForBoot({
SynthDef(\source, { |out = 0|
var sig, env, freq, trig;
trig = Trig.kr(Dust.kr(4), 0.1);
env = EnvGen.kr(Env.perc(0.001, 0.08), trig);
freq = TExpRand.kr(200, 1500, trig);
sig = SinOsc.ar(freq ! 2, mul: 0.2);
sig = sig * env;
Out.ar(out, sig);
}).add;
// assumed completion: a simple reverb SynthDef, followed by manual creation of
// the source and reverb Synths (after [cmd]+[period], these must be re-created by hand)
SynthDef(\reverb, { |in = 0, out = 0|
var sig = In.ar(in, 2);
sig = FreeVerb2.ar(sig[0], sig[1], mix: 0.25, room: 0.8);
Out.ar(out, sig);
}).add;
s.sync;
Synth(\source, [out: ~bus]);
Synth(\reverb, [in: ~bus], s, \addToTail);
});
)
ServerBoot, ServerTree, and ServerQuit are classes that allow automation of server-related
tasks. Each class is essentially a repository where specific action functions can be registered,
and each class evaluates its registered actions when the audio server enters a particular state.
ServerBoot evaluates registered actions when the server boots, ServerQuit does the same
when the server quits, and ServerTree evaluates its actions when the node tree is reinitialized,
that is, when all nodes are wiped via [cmd]+[period] or by evaluating s.freeAll. Actions are
registered by encapsulating the desired code in a function, and adding the function to the
appropriate repository. Code Example 9.7 demonstrates a simple example of making SC say
“good-bye” whenever the server quits.
(
~quitMessage = {
" ***************** ".postln;
" *** good-bye! *** ".postln;
" ***************** ".postln;
};
ServerQuit.add(~quitMessage);
)
s.boot;
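Quitting the server then evaluates the registered function:
s.quit;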
A specific action can be removed from a repository with the remove method:
ServerQuit.remove(~quitMessage);
Or, all registered actions can be removed from a repository with removeAll:
ServerQuit.removeAll;
TIP.RAND(); RISKS OF REMOVING ALL ACTIONS FROM A SERVER ACTION REPOSITORY
Most of the time, calling removeAll on a repository class will not disrupt your work-
flow in a noticeable way. However, when SC is launched, these three repository classes
may have one or more functions that are automatically attached to them. At the time
of writing, ServerBoot includes automated actions that allow the spectrum analyzer
(FreqScope.new) and Node proxies (discussed in Chapter 12) to function properly.
If you evaluate ServerBoot.removeAll and reboot the server, you’ll find that these
objects no longer work correctly. If you ever need to reset one or more repositories to
their default state(s), the simplest way to do so is to recompile the SC class library,
which can be done with [cmd]+[shift]+[L]. Alternatively, you can avoid the use of
removeAll and instead remove custom actions individually.
If the same function is accidentally registered more than once, each press of [cmd]+[period] will cause the message to be posted several times. For functions that only post
text, duplicate registrations clog the post window, but don’t do any real harm. However, if
these functions contain audio-specific code, duplicate registrations can have all sorts of unex-
pected and undesirable effects. A safer approach is to remove all actions before re-evaluating/
re-registering, as depicted in Code Example 9.9.
(
s.waitForBoot({
~treeMessage = {"Server tree cleared".postln};
ServerTree.add(~treeMessage);
});
)
(
s.waitForBoot({
ServerTree.removeAll;
~treeMessage = {"Server tree cleared".postln};
ServerTree.add(~treeMessage);
});
)
Registered repository actions can be manually performed by calling run on the repository class:
ServerTree.run;
Calling run only evaluates the registered actions, simulating the normal triggering mechanism without actually invoking it (evaluating ServerBoot.run, for example, does not boot the server; it only evaluates its registered functions).
Armed with these new tools, Code Example 9.10 improves the audio example in Code
Example 9.6. Specifically, we can automate the creation of the reverb Synth by adding an
appropriate function to ServerTree. To avoid accidental double-registrations, we also define a
cleanup function that wipes Nodes and removes all registered actions. This cleanup function
is called once when we first run the code, and also whenever we quit the server.
(
s.newBusAllocators;
~bus = Bus.audio(s, 2);
~cleanup = {
s.freeAll;
ServerBoot.removeAll;
ServerTree.removeAll;
ServerQuit.removeAll;
};
~cleanup.();
ServerQuit.add(~cleanup);
s.waitForBoot({
SynthDef(\source, { |out = 0|
var sig, env, freq, trig;
trig = Trig.kr(Dust.kr(4), 0.1);
env = EnvGen.kr(Env.perc(0.001, 0.08), trig);
freq = TExpRand.kr(200, 1500, trig);
sig = SinOsc.ar(freq ! 2, mul: 0.2);
sig = sig * env;
Out.ar(out, sig);
}).add;
// assumed completion: the reverb SynthDef from Code Example 9.6, plus a ServerTree
// action that recreates the reverb Synth whenever the node tree is reinitialized
SynthDef(\reverb, { |in = 0, out = 0|
var sig = In.ar(in, 2);
sig = FreeVerb2.ar(sig[0], sig[1], mix: 0.25, room: 0.8);
Out.ar(out, sig);
}).add;
s.sync;
~makeReverb = { Synth(\reverb, [in: ~bus], s, \addToTail) };
ServerTree.add(~makeReverb);
~makeReverb.(); // create the first reverb Synth now
Synth(\source, [out: ~bus]);
});
)
In this improved version, pressing [cmd]+[period] no longer requires jumping back to our
setup code. Instead, a new reverb Synth automatically appears to take the place of its prede-
cessor, allowing us to focus our creative attention exclusively on our sound-making code. At
the same time, we’ve taken appropriate precautions with our cleanup code, so that if we do
re-evaluate our setup code (intentionally or unintentionally), it does not create any technical
problems. When we’re finished, we can quit the server, which removes all registered actions.
The IDE’s find feature can jump directly to specific text (especially useful if you leave unique combinations of characters as “bookmarks” for yourself). Similarly,
the “split” feature allows multiple parts of the same document to be displayed simultaneously
(see Figure 9.1). Still, even with the advantages of these features, there are limits to what the in-
terpreter can handle. If you try to evaluate an enormous block of code that contains thousands
of nested functions, you may even encounter the rare “selector table too big” error message.
FIGURE 9.1 Viewing two parts of the same code file in the IDE using a split view. Split options are
available in the “View” drop-down menu.
For large or even medium-sized projects, partitioning work into separate code files has
distinct advantages. Modularization yields smaller files that focus on specific tasks. These
files are generally easier to read, understand, and debug. Once a sub-file is functional and
complete, you can set it aside and forget about it. In some cases, some of your sub-files may
be general-purpose enough to be incorporated into other projects (for example, a sub-file that
contains a library of your favorite SynthDefs).
Code in a saved .scd file can be evaluated by calling load on a string that represents the
absolute path to that file. If your code files are all stored in the same directory, the process is
even simpler: you can call loadRelative on a string that contains only the name and exten-
sion of the file you want to run. The loadRelative method requires that the primary file and
the external file have previously been saved somewhere on your computer. Otherwise, the files
don’t technically exist yet, and SC will have no idea where to look for them.
You can create a simple demonstration of loadRelative as follows. First, save a SC docu-
ment called “external.scd” that contains the following code:
s.waitForBoot({
{
var sig = SinOsc.ar([500, 503], mul: 0.2);
sig = sig * Env.perc.kr(2);
}.play;
});
Then, save a second SC file named “main.scd” (or some other name) in the same location.
In the main file, evaluate the following line of code. The server should boot, and you should
hear a tone.
"external.scd".loadRelative;
To conclude this chapter, Companion Code 9.1 brings several of these concepts together and
presents a general-purpose template for large-scale SC projects, which can be adapted for a
variety of purposes.
CHAPTER 10
AN EVENT-BASED STRUCTURE
10.1 Overview
One advantage of creating music with SC is its potential to introduce randomness and inde-
terminacy into a composition, enabling musical expression that is difficult to achieve in other
contexts. In other words, choices about certain musical elements can be driven by algorithms,
without having predetermined orders or durations, allowing the composer to take a higher-
level approach to music-making. However, algorithmic composition introduces new layers of
open-endedness that can prove challenging for composers who are new to this kind of approach.
As an entryway into algorithmic composition, this chapter focuses on structuring a com-
position that follows a chronological sequence of musical events. We begin with structures
that are completely deterministic, that is, those that produce the exact same result each time.
Toward the end of the chapter, we add some indeterminate elements that leverage SC’s algo-
rithmic capabilities, while maintaining an overall performance structure that remains ordered.
In practice, a musical event may be treated as a singular unit but may comprise several dif-
ferent simultaneous events from this list. For example, an event might create ten or 100 new
sounds at once, or an event might simultaneously terminate one sound while creating another.
Further still, an event might initiate a timed sequence of sounds whose onsets are separated in
time (such as an arpeggiated chord). Even though multiple sounds are produced, we can still
conceptualize the action as a singular event.
So how are these types of musical events represented in code? A Synth is arguably the most
basic unit of musical expression. When a new Synth is created, a signal-calculating unit is born
on the server. Other times, we play a Pbind to generate a sequence of Synths. Thus, categories
(1) and (4) can be translated into the creation of Synths and/or the playing of Pbinds. For
categories (2) and (5), a set message is a logical translation, which allows manipulation of
Synth arguments in real-time. Alternatively, we might iterate over an array of Synths, or issue
a set message to a Group, to influence multiple Synths at once. In the context of Pbind,
we might call upon a Pdefn to modify some aspect of an active EventStreamPlayer. Lastly,
categories (3) and (6) might translate into using free to destroy one or more Synths, or more
commonly, using a set message to zero an envelope gate to create a smooth fade, or we might
update a Pdefn to gradually fade the amplitude values of an EventStreamPlayer.
Musical events can also be categorized as “one-shots” or “sustaining” events. In a one-shot
event, all generative elements have a finite lifespan and terminate themselves automatically,
usually as a result of a fixed-duration envelope or finite-length pattern. The Synths and Pbinds
in a one-shot event typically do not need to be stored in named variables, because they are
autonomous and there is rarely a need to reference them after creation. Sustaining events, on
the other hand, have an indefinite-length existence, and rely on a future event for termination.
Unlike one-shots, sound-generating elements in sustaining events must rely on some storage
mechanism and naming scheme, so that they can be accessed and terminated later.
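As a small sketch of the distinction (assuming a gated SynthDef like the \sine defined in Code Example 10.1 below):
// one-shot: a fixed-duration envelope frees the Synth automatically
{ SinOsc.ar(440 ! 2, mul: 0.05) * XLine.kr(1, 0.001, 2, doneAction: 2) }.play;
// sustaining: stored in a variable so a later event can terminate it
~pad = Synth(\sine, [freq: 62.midicps]);
~pad.set(\gate, 0); // a future event ends the sound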
The first step toward building a chronological event structure is to express your musical
events as discrete chunks of code. In other words, determine what’s going to happen and
when, and type out the code that performs each step. In early stages of composing, this can
be as easy as putting each chunk of code in its own parenthetical enclosure, so that each event
can be manually evaluated with [cmd]+[return]. As an initial demonstration, consider the code
in Code Example 10.1, which represents a simple (and uninspired) event-based composition.
s.boot;
(
SynthDef(\sine, {
arg atk = 1, rel = 4, gate = 1,
freq = 300, freqLag = 1, amp = 0.1, out = 0;
var sig, env;
env = Env.asr(atk, 1, rel).kr(2, gate);
sig = SinOsc.ar(freq.lag(freqLag) + [0, 2]);
sig = sig * amp * env;
Out.ar(out, sig);
}).add;
SynthDef(\noise, {
arg atk = 1, rel = 4, gate = 1,
freq = 300, amp = 0.2, out = 0;
var sig, env;
env = Env.asr(atk, 1, rel).kr(2, gate);
sig = BPF.ar(PinkNoise.ar(1 ! 2), freq, 0.02, 7);
sig = sig * amp * env;
Out.ar(out, sig);
}).add;
)
(
// event 0: create sine synth
~sine0 = Synth(\sine, [freq: 60.midicps, amp: 0.05]);
)
(
// event 1: create two noise synths
~noise0 = Synth(\noise, [freq: 75.midicps, amp: 0.15]);
~noise1 = Synth(\noise, [freq: 53.midicps, amp: 0.3]);
)
(
// event 2: modify frequencies of noise synths
~noise0.set(\freq, 77.midicps);
~noise1.set(\freq, 56.midicps);
)
(
// event 3: modify frequency of sine synth
~sine0.set(\freq, 59.midicps);
)
(
// event 4: fade all synths
[~sine0, ~noise0, ~noise1].do({ |n| n.set(\gate, 0) });
)
Though devoid of brilliance and nuance, this example illustrates the essential technique of
discretizing a musical performance into individual actions. In practice, your events will likely be substantially larger and more complex, involving dozens of Synths and/or Pbinds. Your SynthDefs, too, may be larger, more developed, and more numerous.
In this current form, our “composition” is performable, but involves some combination of
mouse clicking, scrolling, and/or keyboard shortcuts. We can improve the situation by loading
these event chunks into an ordered structure, explored in the next section.
(
SynthDef(\simple, {
arg atk = 0.2, rel = 1, gate = 1,
freq = 300, freqLag = 1, amp = 0.1, out = 0;
var sig, env;
env = Env.linen(atk, 1, rel).kr(2);
sig = SinOsc.ar(freq.lag(freqLag) + [0, 2]) * amp * env;
Out.ar(out, sig);
}).add;
)
Code Example 10.3 shows a tempting but erroneous attempt to populate an array with
these two events. When the array is created, both sounds play immediately. An attempt
to access either item returns the Synth object, but no sound is produced. This behavior
is a byproduct of the separation between the language and the server. To create the array,
the language must interpret the items it contains. But, once the language interprets a
Synth, an OSC message to the server is automatically generated, and the Synth comes
into existence.
(
~events = [
Synth(\simple, [freq: 330, amp: 0.05]),
Synth(\simple, [freq: 290, amp: 0.05])
];
)
The solution is to wrap each event in a function, shown in Code Example 10.4. When the
interpreter encounters a function, it only sees that it is a function, but does not “look inside”
until the function is explicitly evaluated.
(
~events = [
{Synth(\simple, [freq: 330, amp: 0.05])},
{Synth(\simple, [freq: 290, amp: 0.05])}
];
)
~events[0].(); // play the 0th event
If the array is large, we can avoid dealing with a long list of evaluations by creating a global
index and defining a separate function that evaluates the current event and increments the
index, demonstrated in Code Example 10.5. If the index is beyond the range of the array, ac-
cess attempts return nil, which can be harmlessly evaluated. The index can be manually reset
to zero to return to the beginning, or some other integer to jump to a point in the middle of
the sequence.
(
~index = 0;
~events = [
{Synth(\simple, [freq: 330, amp: 0.05])},
{Synth(\simple, [freq: 290, amp: 0.05])},
{Synth(\simple, [freq: 420, amp: 0.05])},
{Synth(\simple, [freq: 400, amp: 0.05])}
];
~nextEvent = {
~events[~index].();
~index = ~index + 1;
};
)
Code Example 10.5 essentially reinvents the behaviors of next and reset in the context of
routines (introduced in Section 5.2), but without the ability to advance through events auto-
matically with precise timing. If precise timing is desired, we can bundle event durations into
the array and play a simple routine that retrieves these pieces of information from the array as
needed. In Code Example 10.6, each item in the event array is an array containing the event
function and a duration.
(
~index = 0;
~events = [
[{Synth(\simple, [freq: 330, amp: 0.05])}, 2],
[{Synth(\simple, [freq: 290, amp: 0.05])}, 0.5],
[{Synth(\simple, [freq: 420, amp: 0.05])}, 0.25],
[{Synth(\simple, [freq: 400, amp: 0.05])}, 0],
];
~seq = Routine({
~events.do({
~events[~index][0].();
~events[~index][1].wait;
~index = ~index + 1;
});
});
)
~seq.play;
As an exercise for the reader, consider converting the performance events in Code Example 10.1
into either the array or routine structures that appear in Code Examples 10.5 and 10.6.
(
~events = Dictionary()
.add(\play330sine -> {Synth(\simple, [freq: 330, amp: 0.05])})
.add(\play290sine -> {Synth(\simple, [freq: 290, amp: 0.05])});
)
~events[\play330sine].();
~events[\play290sine].();
The decision of whether to use a dictionary or array is a matter of taste, and perhaps also
dictated by the nature of the project; it depends on whether the familiarity of named events
outweighs the convenience of numerical ordering. Though the ability to name events is useful,
using a dictionary is somewhat more prone to human error (e.g., accidentally using the same
name twice). Arguably, a similar effect can be achieved with arrays by strategically placing
comments at points throughout your code. Or, perhaps your memory is sharp enough to recall
events by number alone!
(
~events = Pseq([
{
~sine0 = Synth(\sine, [freq: 60.midicps, amp: 0.05])
},
{
~noise0 = Synth(\noise, [freq: 75.midicps, amp: 0.15]);
~noise1 = Synth(\noise, [freq: 53.midicps, amp: 0.3]);
},
{
~noise0.set(\freq, 77.midicps);
~noise1.set(\freq, 56.midicps);
},
{
~sine0.set(\freq, 59.midicps);
},
{
[~sine0, ~noise0, ~noise1].do({ |n| n.set(\gate, 0) });
}
], 1).asStream;
)
When using Pseq, we don’t need to maintain an event index, and instead merely need to call
next on the stream to retrieve the next function. The stream can also be reset at will. Once
the stream is reset, we can jump to a point along the event timeline by extracting a certain
number of events from the stream, but not evaluating them, as shown in Code Example 10.9.
(
~events.reset;
1.do({~events.next}); // retrieve the first event but do not evaluate
)
The number of rehearsal cues may not match the number of events, in which case numerical indexing is unhelpful. Therefore, we use a dictionary for storing rehearsal cues, for the ability to give each cue a meaningful name.
(
~startAt = Dictionary()
.add(\event1 -> {
~events.reset;
1.do({~events.next});
~sine0 = Synth(\sine, [freq: 60.midicps, amp: 0.05]);
})
.add(\event2 -> {
~events.reset;
2.do({~events.next});
~sine0 = Synth(\sine, [freq: 60.midicps, amp: 0.05]);
~noise0 = Synth(\noise, [freq: 75.midicps, amp: 0.15]);
~noise1 = Synth(\noise, [freq: 53.midicps, amp: 0.3]);
})
.add(\event3 -> {
~events.reset;
3.do({~events.next});
~sine0 = Synth(\sine, [freq: 60.midicps, amp: 0.05]);
~noise0 = Synth(\noise, [freq: 77.midicps, amp: 0.15]);
~noise1 = Synth(\noise, [freq: 56.midicps, amp: 0.3]);
})
.add(\event4 -> {
~events.reset;
4.do({~events.next});
~sine0 = Synth(\sine, [freq: 59.midicps, amp: 0.05]);
~noise0 = Synth(\noise, [freq: 77.midicps, amp: 0.15]);
~noise1 = Synth(\noise, [freq: 56.midicps, amp: 0.3]);
});
)
~events.reset;
(
var scl = [0, 3, 5, 7, 9, 10];
var oct = [72, 72, 84, 84, 84, 96];
var notes = oct.collect({ |n| n + scl.choose});
~synths = ([60] ++ notes).collect({ |n|
// MIDI note 60 is prepended to avoid randomizing the bass note
Synth(\noise, [freq: n.midicps, amp: 0.15]);
});
)
(
Pfin(
exprand(3, 15).round, // creates between 3–15 synths
Pbind(
\instrument, \sine,
\dur, 0.07,
\scale, [0, 3, 5, 7, 9, 10],
\degree, Pbrown(10, 20, 3),
\sustain, 0.01,
\atk, 0.002,
\rel, 0.8,
\amp, 0.03
)
).play;
)
Randomness can also be implemented at a higher level, influencing the sequential arrange-
ment of events. In Code Example 10.13, a Pseq is nested inside the outermost Pseq to ran-
domize the number of tone burst patterns that are played before the filtered noise chord is
faded out. The noteworthy feature of this inner Pseq is that the repeats value is expressed
as a function, guaranteeing that the number of repeats will be uniquely generated each time
the sequence is performed. Using a similar technique, we can randomize the order in which
certain events occur. Code Example 10.14 uses Pshuf to randomize the order of the first two
events, so that either the tone bursts or the filtered noise chord may occur first.
(
~events = Pseq([
{
// create cluster chord
var scl = [0, 3, 5, 7, 9, 10];
var oct = [72, 72, 84, 84, 84, 96];
var notes = oct.collect({ |n| n + scl.choose});
~synths = ([60] ++ notes).collect({ |n|
Synth(\noise, [freq: n.midicps, amp: 0.15]);
});
},
Pseq([
{ Pfin(exprand(3, 15).round, Pbind(
\instrument, \sine, \dur, 0.07,
\scale, [0, 3, 5, 7, 9, 10], \degree, Pbrown(10, 20, 3),
\sustain, 0.01, \atk, 0.002, \rel, 0.8, \amp, 0.03
)).play; }
], {rrand(3, 5)}), // assumed completion: repeats expressed as a function
{ ~synths.do({ |n| n.set(\gate, 0) }) } // assumed completion: fade the chord
], 1).asStream;
)
~events.next.();
~events.reset;
(
~events = Pseq([
Pshuf([
{
var scl = [0, 3, 5, 7, 9, 10];
var oct = [72, 72, 84, 84, 84, 96];
var notes = oct.collect({ |n| n + scl.choose });
~synths = ([60] ++ notes).collect({ |n|
Synth(\noise, [freq: n.midicps, amp: 0.15]);
});
},
Pseq([
{
Pfin(
exprand(3, 15).round,
Pbind(
\instrument, \sine,
\dur, 0.07,
\degree, Pbrown(10, 20, 3),
\scale, [0, 3, 5, 7, 9, 10],
\sustain, 0.01,
\atk, 0.002,
\rel, 0.8,
\amp, 0.03
)
).play;
}
], {rrand(3, 5)})
], 1),
{ ~synths.do({ |n| n.set(\gate, 0) }) } // assumed completion: fade the chord
], 1).asStream;
)
~events.next.();
~events.reset;
Companion Code 10.1 synthesizes these techniques for structuring event-based compositions
and, building upon foundational techniques from the previous chapter, presents a somewhat
larger and more complex event-based project.
Note
1 It should be noted that Dictionary is a distant superclass of Event. Dictionaries cannot be
“played” like Events, nor are they deeply intertwined with Patterns and Streams. In the context of
storing functions that perform arbitrary actions, however, these two classes are largely interchange-
able, with minor syntax differences. The reader may even find Events to be the more practical option.
In this section, Dictionary is only favored to avoid potential confusion between the abstract notion
of a musical “event” and the “Event” class.
CHAPTER 11
A STATE-BASED STRUCTURE
11.1 Overview
State-based composition represents a different way of thinking about musical structure.
Instead of relying on a predetermined timeline, we rely on a collection of sounds that can be
activated and deactivated at will, in any order or combination.
Imagine four sound-generating “modules” named A, B, C, and D. In this context, a
module might be a Synth, or an EventStreamPlayer created by playing a Pbind. You might
spontaneously decide to start a performance by activating modules B and D. After fifteen
seconds, you might activate module A, and, ten seconds later, deactivate module B. Module
C might remain inactive for the entire duration of the performance (an imagined “score” for
this performance is depicted in Figure 11.1). Performed on a different day, your choices might
be entirely different. The term “state-based” refers to the fact that at any point during a per-
formance, the program is in a particular “state.” States themselves can involve sounds that are
static or dynamic, dense or sparse, regular or unpredictable. With a large collection of sound-
generating modules, dozens or even hundreds of combinations are possible.
FIGURE 11.1 A partial imagined “score” for a state-based composition involving four sound-
generating modules.
A state-based design suits open-ended contexts such as sound installations, which are flexible in form as well as duration; listeners can engage by coming and going as they please, by moving
around a performance space, or simply listening with an “in-the-moment” mentality.
Because states can theoretically occur in any order/combination, it’s important to design
state-change actions to avoid conflicting or contradictory behavior. In other words, state-
change actions should be state-aware on some level. For example, an attempt to activate sound
module B should have no effect if module B is already playing. Similarly, an attempt to modify
the sound of a module that is not playing should be ignored.
A timeline is a bedrock design feature of DAWs and waveform editing software. Creating
music without one can feel disorienting! You might feel adrift in a sea of timeless musical
possibilities. To help bridge the gap, it can be helpful to first envision a rough order in which
your states will occur—maybe even create a simple score for yourself to follow—but then
venture outside the boundaries of your plan once you start refining and rehearsing. With the
flexibility to dynamically reorder your ideas, you might stumble upon unexpectedly inter-
esting sequences and combinations.
(
s.waitForBoot({
SynthDef(\pulse_noise, { |gate = 0|
var sig, env;
env = Env.asr(5, 1, 5, [2, -2]).kr(1, gate);
sig = BPF.ar(PinkNoise.ar(1 ! 2), 1600, 0.01, 3);
sig = sig * env * LFPulse.kr(8, width: 0.2);
Out.ar(0, sig);
}).add;
SynthDef(\beating_tone, { |gate = 0|
var sig, env;
env = Env.asr(5, 1, 5, [2, -2]).kr(1, gate);
sig = SinOsc.ar([250, 251], mul: 0.1) * env;
Out.ar(0, sig);
}).add;
SynthDef(\crackle, { |gate = 0|
var sig, env;
env = Env.asr(5, 1, 5, [2, -2]).kr(1, gate);
sig = Dust.ar(8 ! 2, 0.2) * env;
Out.ar(0, sig);
}).add;
s.sync;
~a = Synth.newPaused(\pulse_noise);
~b = Synth.newPaused(\beating_tone);
~c = Synth.newPaused(\crackle);
});
)
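The corresponding performance code (presumably six lines, one pair per module) fades a module in by unpausing it and opening its gate, and fades it out by closing the gate, whose doneAction of 1 re-pauses the Synth:
~a.set(\gate, 1).run(true); // fade in module A
~a.set(\gate, 0); // fade out module A
~b.set(\gate, 1).run(true);
~b.set(\gate, 0);
~c.set(\gate, 1).run(true);
~c.set(\gate, 0);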
We can reduce the performance code in Code Example 11.1 from six lines to three. Instead of
using a pair of statements for on/off functionality, we can define a toggle function that turns
a module on if it is off, and vice-versa. This improvement makes use of the Event class, which
allows us to bundle additional information with each Synth. Specifically, we include a Boolean
for tracking each Synth’s paused status. After running the setup code in Code Example 11.1,
Code Example 11.2 can be used as performance code. The three toggle statements at the
bottom of Code Example 11.2 can be run in any order/combination.
(
~a = (synth: Synth.newPaused(\pulse_noise), running: false);
~b = (synth: Synth.newPaused(\beating_tone), running: false);
~c = (synth: Synth.newPaused(\crackle), running: false);
~toggle = { |module|
if(
module[\running],
{module[\synth].set(\gate, 0)},
{module[\synth].set(\gate, 1).run(true)}
);
module[\running] = module[\running].not;
};
)
~toggle.(~a);
~toggle.(~b);
~toggle.(~c);
This approach can be applied to generative SynthDefs of all different kinds, as long as they
include a gated, pausing envelope that affects the overall amplitude of the output signal.
The SynthDefs in this section include only a gate argument, but other arguments can be
added if desired. A Synth can still receive a set message while paused, but the message
will not take effect until it is unpaused. Code Example 11.3 provides a brief example based
on a SynthDef from Code Example 11.1, which adds a frequency argument. Though not
demonstrated here, adding arguments for envelope attack and release times can be helpful
to control transition timings during state changes (in this example, they are hard-coded as
five seconds each).
(
SynthDef(\beating_tone, { |freq = 250, freqlag = 0, gate = 0|
var sig, env;
env = Env.asr(5, 1, 5, [2, -2]).kr(1, gate);
sig = SinOsc.ar(freq.lag(freqlag) + [0, 1], mul: 0.1) * env;
Out.ar(0, sig);
}).add;
)
~toggle.(~b); // fade in
~b[\synth].set(\freq, 300, \freqlag, 2); // assumed demonstration: glide to a new frequency
~toggle.(~b); // fade out
Adding lots of SynthDef arguments is fine, though perhaps not totally in the “spirit” of state-
based composition. Having many individually controllable parameters invites a mindset
oriented toward precise individual adjustments, and also can lead to large, unwieldy perfor-
mance interfaces. It is arguably better to create generative SynthDefs that exhibit complex and
evolving behaviors on their own, without reliance on manipulation from an external source.
For example, if you find yourself relying heavily on set messages, consider replacing those
arguments with control rate UGens that produce similar effects. Ideas for complex generative
materials are presented throughout Chapters 3 and 4. This being said, there are no hard rules
in state-based composition, and these are only suggested guidelines. Your sounds are yours
to craft!
When a state change occurs from a previous state, the logic is as follows: if a module is in-
cluded in the current state but not in the previous state, turn it on. If a module is not included
in the current state, but was specified in the previous state, switch it off.
Code Example 11.4 creates five different generative SynthDefs and five paused Synths.
Synths are stored in an Event, at keys corresponding to our A–E naming scheme. Each state is
represented as an array of keys corresponding to modules that should be activated when that
state is recalled. Finally, we build a function that handles state changes. It accepts an integer
that represents a state index, and, in response, it updates current/previous state information
and relays gate/run messages to Synths. The three ~playState lines near the bottom of the
example can be evaluated in any order to move from one state to another.
(
s.waitForBoot({
SynthDef(\pulse_noise, { |gate = 0|
var sig, env;
env = Env.asr(5, 1, 5, [2, -2]).kr(1, gate);
sig = BPF.ar(PinkNoise.ar(1 ! 2), 1600, 0.005, 4);
sig = sig * env * LFPulse.kr(8, width: 0.2);
Out.ar(0, sig);
}).add;
SynthDef(\beating_tone, { |gate = 0|
var sig, env;
env = Env.asr(5, 1, 5, [2, -2]).kr(1, gate);
sig = SinOsc.ar([250, 251], mul: 0.08) * env;
Out.ar(0, sig);
}).add;
SynthDef(\crackle, { |gate = 0|
var sig, env;
env = Env.asr(5, 1, 5, [2, -2]).kr(1, gate);
sig = Dust.ar(8 ! 2, 0.2) * env;
Out.ar(0, sig);
}).add;
SynthDef(\buzzy, { |gate = 0|
var sig, env, fb;
env = Env.asr(5, 1, 5, [2, -2]).kr(1, gate);
fb = LFNoise1.kr(3 ! 2).range(1.9, 2.1);
sig = SinOscFB.ar([60, 60.2], feedback: fb, mul: 0.008) * env;
Out.ar(0, sig);
}).add;
SynthDef(\windy, { |gate = 0|
var sig, env, cf;
env = Env.asr(5, 1, 5, [2, -2]).kr(1, gate);
cf = LFNoise1.kr(0.2 ! 2).range(400, 800);
sig = WhiteNoise.ar(0.2 ! 2);
sig = BPF.ar(sig, cf, 0.05) * env;
Out.ar(0, sig);
}).add;
s.sync;
~modules = (
a: Synth.newPaused(\pulse_noise),
b: Synth.newPaused(\beating_tone),
c: Synth.newPaused(\crackle),
d: Synth.newPaused(\buzzy),
e: Synth.newPaused(\windy)
);
~states = [
[\a, \c, \e],
[\b, \c, \d],
[\a, \b],
[] // state 3 (everything off)
];
~currState = [];
~playState = { |selection|
~prevState = ~currState;
~currState = ~states[selection];
~modules.keys.do({ |module|
if(
~currState.includes(module) &&
~prevState.includes(module).not,
{~modules[module].set(\gate, 1).run(true)}
);
if(
~currState.includes(module).not &&
~prevState.includes(module),
{~modules[module].set(\gate, 0)}
);
});
// return the state so it appears in the post window
~states[selection];
};
});
)
~playState.(0);
~playState.(1);
~playState.(2);
(
Routine({// clean up when finished:
~playState.(3);
5.wait;
~modules.do({ |n| n.free });
}).play;
)
The code in Code Example 11.4 is meant to demonstrate an efficient and scalable management
strategy for a state-based composition with composite states. Our single-character module
naming scheme (A–E) is chosen for brevity, but more descriptive names can be substituted.
Also, bear in mind that the number of SynthDefs need not match the number of modules
(a well-designed SynthDef can spawn multiple Synths that sound quite different from each
other). Adapting this example to a larger and more interesting SynthDef collection is left as
an open exercise for the reader.
• doneAction: 1 is useful for pausing individual Synths, but a poor choice when used
with Pbind, because it results in an accumulation of Synths that are paused but never
destroyed. Even though paused Synths do not demand signal calculation resources, there
is an upper limit to the number of nodes allowed on the server at one time, accessible via
s.options.maxNodes. So, the hard-coded doneAction is replaced with an argument, to
provide flexibility.
• Each Pbind is followed by a slightly unusual-looking “.play.pause” message
combination. The play message returns an EventStreamPlayer, which is immediately put
into a paused state.
• Each Pbind amplitude value is scaled by a Pdefn. Each Pdefn name is identical to the
module it belongs to, which helps make our ~playState function slightly more concise.
For simplicity and consistency, Pdefn values are treated as normalized scalars, ranging
between zero and one.
• When a stream reaches its end, it does not produce further output unless reset. When we
fade an EventStreamPlayer using Pseg, the amplitude value stream—and by extension,
the EventStreamPlayer itself—becomes finite. As a result, once the fade-out is complete,
we must reset the EventStreamPlayer before it will play again.
• As a small enhancement, the ~playState function can now accept a crossfade duration
as its second argument, allowing us to specify the duration of each state change.
(
s.waitForBoot({
SynthDef(\beating_tone, {
arg freq = 250, amp = 0.1, atk = 0.02,
rel = 0.2, gate = 1, done = 2;
var sig, env;
env = Env.asr(atk, 1, rel, [2, -2]).kr(done, gate);
sig = SinOsc.ar(freq + {Rand(-2.0, 2.0)}.dup(2)) * env * amp;
Out.ar(0, sig);
}).add;
s.sync;
~modules = (
a: Pbind(
\instrument, \beating_tone,
\dur, 0.2,
\degree, Prand([-9, -5, -1], inf),
\atk, 1,
\sustain, 1,
\rel, 1,
\done, 2,
\amp, 0.04 * Pdefn(\a, 1),
).play.pause,
b: Pbind(
\instrument, \beating_tone,
\dur, Pexprand(0.001, 1),
\freq, Phprand(3000, 8000),
\atk, 0.001,
\sustain, Pexprand(0.002, 0.015),
\rel, 0.001,
\amp, 0.015 * Pdefn(\b, 1),
).play.pause,
c: Pbind(
\instrument, \beating_tone,
\dur, 0.5,
\degree, 10,
\atk, 0.04,
\sustain, 0.05,
\rel, 0.3,
\amp, Pseq(Array.geom(7, 0.1, 0.5), inf) * Pdefn(\c, 1),
).play.pause
);
~states = [
[\a, \b],
[\a, \c],
[\b, \c],
[]
];
~currState = [];
~playState = { |selection, fadedur = 5|
~prevState = ~currState;
~currState = ~states[selection];
~modules.keys.do({ |module|
if(
~currState.includes(module) &&
~prevState.includes(module).not,
{ // fade in
Pdefn(
module,
Pseq([
Env([0, 1], [fadedur]).asPseg,
Pseq([1],inf)
], 1)
);
~modules[module].reset.play;
}
);
if(
~currState.includes(module).not &&
~prevState.includes(module),
// fade out
{Pdefn(module, Env([1, 0], [fadedur]).asPseg)}
);
});
// return the state so it appears in the post window
~states[selection];
}
});
)
If you switch back and forth between certain states quickly enough, you will see an “already playing” message appear in the post window. This occurs because SC attempts to play an EventStreamPlayer that is still actively fading out from a previous state and thus still considered to be playing. These messages are harmless and can be ignored.
The generative modules in this example are all EventStreamPlayers, but it is possible to compose a state-based work involving a combination of EventStreamPlayers and Synths in your module collection. In this case, the only additional requirement is that the ~playState function must include an additional layer of conditional logic to separate EventStreamPlayers from Synths, because they require different types of commands for fading in and out.
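A minimal sketch of that extra layer might look like the following, assuming modules of both types are stored in ~modules as before:
// inside ~playState's iteration:
if(
~modules[module].isKindOf(EventStreamPlayer),
{Pdefn(module, Env([1, 0], [5]).asPseg)}, // pattern module: fade out via Pdefn
{~modules[module].set(\gate, 0)} // Synth module: close its envelope gate
);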
(
s.waitForBoot({
SynthDef(\crackle_burst, { |length = 3|
var sig, env;
env = Env.perc(0.01, length, curve: [0, -4]).kr(2);
sig = Dust.ar(XLine.kr(30, 1, length) ! 2, 0.6) * env;
sig = LPF.ar(sig, ExpRand(2000, 6000));
Out.ar(0, sig);
}).add;
SynthDef(\buzzy_burst, {
arg atk = 0.01, rel = 0.5, freq = 80, fb = 2, amp = 1;
// the rest of this SynthDef was lost to a page break; a plausible reconstruction:
var sig, env;
env = Env.perc(atk, rel).kr(2);
sig = SinOscFB.ar(freq + [0, 1], fb) * env * amp * 0.1;
Out.ar(0, sig);
}).add;
s.sync;
~oneShots = (
a: { Synth(\crackle_burst, [length: rrand(3, 6)]) },
b: {
[55, 58, 60, 65, 69].do({ |n|
Synth(\buzzy_burst, [
atk: 3,
rel: 3,
freq: n.midicps,
fb: rrand(1.6, 2.2),
amp: 1/3
]);
});
},
c: {
Pfin({rrand(9, 30)}, Pbind(
\instrument, \buzzy_burst,
\dur, Pexprand(0.01, 0.1),
\amp, Pgeom(1, {rrand(-1.5, -0.7).dbamp}),
\midinote, Pbrown(35, 110.0, 7),
\fb, Pwhite(1.5, 2.6)
)).play;
}
);
});
)
~oneShots[\a].();
~oneShots[\b].();
~oneShots[\c].();
These one-shot functions are static in the sense that we cannot influence their sound, only
trigger them. If we want to pass custom argument values to a one-shot in order to modify its
behavior at runtime, we can declare one or more arguments at the top of the corresponding
one-shot function (stored in the ~oneShots Event) and provide specific values when the
one-shot is called. For example, Code Example 11.7 provides an alternate version of the
first one-shot in the previous example, which allows the duration of the crackle burst to be
customized.
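The code of Example 11.7 is not reproduced here; based on this description, it would look something like the following:
~oneShots[\a] = { |length = 4|
Synth(\crackle_burst, [length: length]);
};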
~oneShots[\a].(1.5); // short
~oneShots[\a].(8); // long
(
ServerTree.removeAll;
s.newBusAllocators;
~bus = Bus.audio(s, 2);
s.waitForBoot({
SynthDef(\pulse_noise, {
arg out = 0, auxout = 0, auxamp = 0, lag = 1, gate = 0;
var sig, env;
env = Env.asr(5, 1, 5, [2, -2]).kr(1, gate);
sig = BPF.ar(PinkNoise.ar(1 ! 2), 1600, 0.005, 4);
sig = sig * env * LFPulse.kr(8, width: 0.2);
Out.ar(out, sig);
Out.ar(auxout, sig * auxamp.varlag(lag));
}).add;
SynthDef(\crackle, {
arg out = 0, auxout = 0, auxamp = 0, lag = 1, gate = 0;
var sig, env;
env = Env.asr(5, 1, 5, [2, -2]).kr(1, gate);
sig = Decay2.ar(Dust.ar(8 ! 2), 0.0005, 0.005, SinOsc.ar(900));
sig = Splay.ar(sig, 0.6) * env * 0.2;
Out.ar(out, sig);
Out.ar(auxout, sig * auxamp.varlag(lag));
}).add;
SynthDef(\reverb, { |in = 0, out = 0|
var sig, fx;
sig = In.ar(in, 2);
sig = FreeVerb2.ar(sig[0], sig[1], 1, 0.85);
Out.ar(out, sig);
}).add;
s.sync;
~makeNodes = {
~srcGroup = Group();
~reverb = Synth.after(~srcGroup, \reverb, [in: ~bus]);
~modules = (
a: Synth.newPaused(\pulse_noise, [auxout: ~bus], ~srcGroup),
b: Synth.newPaused(\crackle, [auxout: ~bus], ~srcGroup),
);
~currState = [];
};
ServerTree.add(~makeNodes);
ServerTree.run;
~states = [
[\a, \b],
[]
];
~playState = { |selection|
~prevState = ~currState;
~currState = ~states[selection];
~modules.keys.do({ |module|
if(
~currState.includes(module) &&
~prevState.includes(module).not,
{~modules[module].set(\gate, 1).run(true)}
);
if(
~currState.includes(module).not &&
~prevState.includes(module),
{~modules[module].set(\gate, 0)}
);
});
// return the state so it appears in the post window
~states[selection];
};
});
)
(
// fade out module 'a' reverb send, fade in module 'b' reverb send
~modules[\a].set(\auxamp, 0, \lag, 5);
~modules[\b].set(\auxamp, -3.dbamp, \lag, 5);
)
(
Routine({// clean up when finished
~playState.(1); // everything off
5.wait; ~modules.do({ |n| n.free});
3.wait; ~reverb.free;
}).play;
)
~states = [
[\tone, \grains, \noise], // state 0
[\crunchy, \grains, \pad], // state 1
[\tone, \pad], // state 2
[] // state 3
];
Examples of algorithms for progressing through states might include cycling through states in a
specific order or choosing states at random. These options are elegantly handled with patterns.
Code Example 11.11, which could be included within a block of setup code, defines algorithms
for selecting states and durations, and a routine that performs them. Pattern substitutions are
possible. For example, Pshuf will determine a random order that repeats, while Pxrand will
move randomly from one state to another without ever selecting the same state twice in a row.
(
~stateStream = Pseq([0, 1, 2], inf).asStream;
~durStream = Pwhite(20, 80, inf).asStream;
~statePerformer = Routine({
loop{
~playState.(~stateStream.next);
~durStream.next.wait;
}
});
)
~statePerformer.play;
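For instance, substituting Pxrand requires changing only one line:
~stateStream = Pxrand([0, 1, 2], inf).asStream; // random, but never the same state twice in a row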
For a more complex state sequence, Markov chains are a popular choice. A Markov chain describes
a sequence of states in which the probability of selecting a particular state depends on the previous
state. If we are currently in state zero, we might want a 30% chance of moving to state one, and
a 70% chance of moving to state two. A complete hypothetical flowchart appears in Figure 11.2.
FIGURE 11.2 A flowchart depicting a Markov chain that governs the probabilities of moving from one
state to another in a small state-based composition.
Implementing a Markov chain involves adding a second array (of the same size as the array
of states) that describes the possible target states from a given state, as well as the probabilities
of moving to each of those states. Using an Event allows us to provide meaningful names for
the data, shown in Code Example 11.12. In this example, if we start in state zero, there is a
0% chance of staying in state zero, a 30% chance of moving to state one, and a 70% chance
of moving to state two. Note that if we were to start in state three (the “everything off” state),
all three of the other states are equally likely, but the Markov chain will never return to state
three, since the probability of moving to state three is 0% for all four states. In fact, the last
item in the Array could be removed completely; if state three represents the end of the performance, there is no need to store information about what comes next.
~targets = [
(states: [0, 1, 2, 3], weights: [0, 0.3, 0.7, 0]),
(states: [0, 1, 2, 3], weights: [0.9, 0, 0.1, 0]),
(states: [0, 1, 2, 3], weights: [0.5, 0.5, 0, 0]),
(states: [0, 1, 2, 3], weights: [1/3, 1/3, 1/3, 0])
];
Using this Markov chain approach, a performance routine (pictured in Code Example 11.13)
might begin by selecting a random state, after which it enters a loop. In this loop, it plays the
next state, determines where to go next, and waits for one minute. This example also includes
a second block of code that can be used to end the performance.
(
~statePerformer = Routine({
var next = [0, 1, 2].choose;
loop{
~playState.(next);
next = ~targets[next][\states].wchoose(~targets[next][\weights]);
wait(60);
};
});
)
~statePerformer.play;
(
var off = ~states.collect({ |state| state == [] }).indexOf(true);
~statePerformer.stop;
~playState.(off);
)
In this specific example, we know that state three is the empty state, so it’s true that we
could stop the performance routine and simply run ~playState.(3). However, this expression would need to change if we added or removed states later. So instead, we use iteration to
determine the index of the empty state, and then play it. If there is at least one empty state in
the ~states array, the block of code at the end of Code Example 11.13 will always stop the
piece, regardless of the order or number of states.
A separate looping routine, running in parallel, can also be used to algorithmically activate
one-shots. Code Example 11.14 assumes your one-shots follow the model from Section 11.5,
stored as functions inside an Event. The loop begins with a wait, to avoid always triggering a
one-shot at the instant the routine begins.
(
~oneShotPerformer = Routine({
loop{
exprand(5, 60).wait;
~oneShots.choose.();
}
});
)
~oneShotPerformer.play;
We have options for automating one-shots beyond this simple example of random play-and-wait. You may, for example, want to associate certain one-shots with certain states, so that each
state only permits randomization within a particular subset of one-shots. Code Example 11.15
demonstrates an example of how to apply such restrictions. In the ~oneShotTargets array,
“foo” and “bar” are defined as valid one-shot options while in state 0, only “foo” is possible in state 1, and only “bar” is possible in state 2. No one-shots are possible in the empty
state. Because our ~playState function automatically updates the current state and stores it
in the ~currState variable, we can rely on it when selecting one-shots. The routine in Code
Example 11.15 identifies the index of the current state, selects a one-shot based on the current
state, and plays it.
(
~oneShotTargets = [
[\foo, \bar], // valid options while in state 0
[\foo], // ... state 1
[\bar], // ... state 2
[] // no one-shots are allowed in the empty state
];
~oneShotPerformer = Routine({
var currIndex, selectedOneShot;
loop{
exprand(5, 45).wait;
currIndex = ~states.collect({ |n|
n == ~currState;
}).indexOf(true);
selectedOneShot = ~oneShotTargets[currIndex].choose;
~oneShots[selectedOneShot].();
}
});
)
~oneShotPerformer.play;
To conclude, this chapter offers two Companion Code bundles that synthesize its concepts into a complete state-based composition. Companion Code 11.1 relies on a fully automated Markov chain approach, while Companion Code 11.2 uses a GUI for real-time human interaction.
CHAPTER 12
LIVE CODING
12.1 Overview
Live coding is an improvisatory performance practice that emerged in the early 2000s and
remains an area of active development and exploration today. It encompasses a diverse set
of styles and aesthetics that extend beyond the boundaries of music and into the broader
digital arts, unified by the defining characteristic of creating the source code for a performance in real-time, as the performance is happening. Since the inception of live coding,
practitioners have established various organizations (notably TOPLAP) for the purposes of
promoting, disseminating, and celebrating the practice.1 SC is one of a growing collection
of software platforms that pair nicely with live coding, alongside ChucK, FoxDot, Hydra,
ixi lang, Sonic Pi, TidalCycles, and others (notably, many of these platforms actually use the
SC audio server as their backend synthesis engine). Like playing a musical instrument, live
coding is an embodied, public-facing practice; it involves a human performer onstage, almost
always accompanied by a projected computer screen display. Live coding is also a natural fit
for livestreams and other online venues.
Overall, live coding is a fundamentally different approach to music-making than the
event- and state-based approaches explored in previous chapters. It demands focus and preparation from the performer, and blurs the distinction between composing and performing.
While live coding bears a slight resemblance to state-based composition (in the sense that
we build an interface and then have the option to make improvisatory choices during performance), these similarities are few and superficial. In a live coding context, we are not simply
manipulating pre-made code but also writing it from scratch. It’s a bit like performing on a
musical instrument while simultaneously building and modifying that instrument, all in front
of a live audience! The coding environment itself becomes the performance interface. In a
sense, we’re also taking a step back from the highly “prepared” approaches of the previous two
chapters and returning to a more dynamic (and sometimes messy) approach of running code
on a line-by-line and chunk-by-chunk basis.
Why pursue live coding? In addition to being fun and exciting, it’s also a totally new
mindset for most artists. It pushes you out of your comfort zone and demands a more time-sensitive decision-making process, but as a result, tends to elicit sounds and ideas that might
otherwise go undiscovered. It’s a different sort of experience for audience members, too, who
get to hear your sounds while watching your code evolve. Live coding can feel challenging and
perhaps a bit awkward for a newcomer, but it’s important not to get discouraged. To develop
a sense of confidence and personal style, live coding requires an investment of time, practice,
and dedication, plus a solid familiarity with the tools at your disposal—just like learning to
play a musical instrument.
Among the compelling and memorable quotes on live coding is one from Sam Aaron, the creator of Sonic Pi, who reassures us that “there are no mistakes, only opportunities,” meaning
that you might begin a live coded performance with some pre-formed musical ideas, but
during the course of active listening, spontaneous choices, and a few inevitable missteps, you
might find yourself on an unpredictable and discovery-rich musical journey.2 It’s all part of
the experience!
s.boot;
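The UGen function itself does not appear here; a plausible stand-in, built only from constant values, might be:
~fn = {
var sig = VarSaw.ar([110, 110.5], mul: 0.2);
sig = RLPF.ar(sig, 800, 0.3);
};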
x = ~fn.play;
And, suppose we want to change this sound while it’s playing (maybe adjust the filter frequency or reduce the amplitude). Well, it’s a bit late for that. We should have planned ahead!
This UGen function has no arguments, just a handful of constant values, so sending set
messages won’t accomplish anything. It’s also important to realize that redefining the UGen
function while the sound is playing also has no effect:
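For example (an illustrative redefinition):
~fn = {PinkNoise.ar(0.1 ! 2)}; // evaluates fine, but the sound does not change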
The sound remains the same because the function ~fn that defines the algorithm is totally
independent from the Synth x that executes the algorithm. The former exists in the language,
and the latter on the audio server. Redefining one has no effect on the other. So, with this approach, our best option is probably to remove the current Synth and create another:
x.release(3);
x = ~fn.play;
However, this is awkward and disruptive, and the continuity is artificial. The question
becomes: How can we change a live sound in real-time, without having to plan all the moves
in advance?
Proxies were briefly mentioned in Section 5.5, in order to introduce the pattern proxies
Pdef, Pbindef, and Pdefn (subclasses of PatternProxy), which facilitate real-time changes to
event streams and value streams. This chapter focuses on two additional classes, NodeProxy
and TaskProxy, which let us apply proxy functionality to Synths and routines. This chapter
aims to outline the essential features of JITLib and get you tinkering with live coding as
quickly as possible. In the SC help documentation, there are several JITLib tutorial files, notably the four-part “JITLib basic concepts” series, which are full of interesting code morsels
and worth reading for those seeking more depth.4
12.3 NodeProxy
An important note before you approach this section: If you’ve called ServerBoot.removeAll
during your current session, you must either recompile the class library, or quit and restart
SC. On startup, ServerBoot automatically includes registered actions which, if absent, will
prevent some JITLib features from working correctly.
s.boot;
~n = NodeProxy.new;
~n.play;
Every proxy has a source, which is the content for which the proxy serves as a placeholder.
Initially, a NodeProxy’s source is nil, which results in silence when played:
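The next step, absent here, would assign a source, at which point the proxy becomes audible; for example:
~n.source = {PinkNoise.ar(0.15 ! 2)};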
Throughout this chapter, keep in mind that live coding is a dynamic, improvisatory experience. A book isn’t an optimal venue for conveying the “flow” of a live coding session, because
it can’t easily or concisely capture code modifications and deletions that take place over time.
When changing a proxy’s source, you might type out a new line (as pictured above), or you
might simply overwrite the first line. In any case, you’re encouraged to improvise and explore
as you follow along with these examples and figure out a style that suits you!
On a basic level, we’ve solved the problem presented in the previous section. Instead of
having to remove an old Synth and create a new one, we now have a singular object, whose
content can be spontaneously altered. However, this change is abrupt, and this might not be
what we want. Every NodeProxy also has a fadeTime that defaults to 0.02 seconds, which
defines the length of a crossfade that is applied whenever the source changes:
~n.fadeTime = 1;
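Any subsequent source change now crossfades over one second; for instance (an illustrative source):
~n.source = {VarSaw.ar([100, 100.5], mul: 0.1)};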
If the source of a new NodeProxy is an audio rate signal, it automatically becomes an audio
rate proxy, and the same is true for control rate sources/proxies. When a NodeProxy’s source
is defined, it automatically plays to a private audio bus or control bus. This bus index, too, is
selected automatically. When we play a NodeProxy, what we’re actually doing is establishing a
routing connection from the private bus to hardware output channels, and as a result, we hear
the signal. The stop message undoes this routing connection but leaves the source unchanged
and still playing on its private bus. The stop method accepts an optional fade time. The play
method also accepts a fade time, but because it’s not the first argument, it must be explicitly
specified by keyword if provided all by itself:
~n.stop(3);
~n.play(fadeTime: 3);
The clear message, which also takes an optional fade time, will fully reset a NodeProxy,
severing the monitoring connection and resetting its source to nil:
~n.clear(3);
~sig = NodeProxy().fadeTime_(3).play;
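The proxy’s source does not appear in this excerpt; consistent with the surrounding text, it would be a square wave along these lines:
~sig.source = {Pulse.ar(55 + [0, 0.3], 0.5, 0.1)};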
Suppose we want to pass this oscillator through a low-pass filter. While it’s true that we could
just redefine the source to include a filter, this approach doesn’t showcase the modularity of
NodeProxy. Instead, we’ll create and play a second proxy that represents a filter effect:
~filt = NodeProxy().fadeTime_(3).play;
And now, with ~sig still playing, we define the source of the filter proxy and supply the oscillator proxy as the filter’s input. We also stop monitoring the square wave proxy, which is
now being indirectly monitored through the filter proxy. These two statements can be run
simultaneously, or one after the other. How you evaluate these statements primarily depends
on how/whether you want the filtered and unfiltered sounds to overlap.
(
~filt.source = {RLPF.ar(~sig.ar(2), 800, 0.1)};
~sig.stop(3);
)
When we nest a NodeProxy inside of another, best practice is to supply the proxy rate and
number of channels. Providing only the name (e.g., ~sig) may work in some cases but may
produce warning messages about channel size mismatches in others.
Let’s add another component to our signal flow, but in the upstream direction instead
of downstream: we’ll modulate the filter’s cutoff frequency with an LFO. The process here
is similar: we create a new proxy, define its source, and plug it into our existing NodeProxy
infrastructure. This new proxy automatically becomes a control rate proxy, due to its control
rate source. Note that we don’t play the LFO proxy; its job is not to be heard, but rather to
influence some parameter of another signal.
~lfo = NodeProxy();
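The LFO’s source, and the updated filter definition that uses it, are not shown here; a plausible completion is:
~lfo.source = {LFTri.kr(0.2).exprange(300, 1200)};
~filt.source = {RLPF.ar(~sig.ar(2), ~lfo.kr(1), 0.1)};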
With these basic concepts in mind, now is an excellent time to take the wheel and start
modifying these sounds on your own! Or, if you’re finished, you can simply end the
performance:
(
~filt.clear(5); // fade out and clear the monitored proxy
~sig.clear; // then clean up the others
~lfo.clear;
)
(
~a = 5; // create two environment variables
~b = 7;
)
It’s possible to create and use a different environment to manage a totally separate collection
of variables, and then return to the original environment at any time, where all of our original
variables will be waiting for us. Environments let us compartmentalize our data, instead of
having to messily stash everything in one giant box.
A good way to think about managing multiple environments is to imagine them in
a spring-loaded vertical stack, with the current environment on the top, as illustrated in
Code Example 12.3. At any time, we can push a new environment to the top of the stack,
pressing the others down, and the new topmost environment becomes our current environment. When finished, we can pop that environment out of the stack, after which the
others rise to fill the empty space, and the new topmost environment becomes our current
environment.
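Code Example 12.3 presumably begins by pushing a fresh environment before storing the data below:
q = Environment.new.push; // a new, empty environment is now current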
(
~apple = 17; // store some data
~orange = 19;
)
currentEnvironment; // check the data: only ~apple and ~orange live here
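Popping then restores the original environment, along these lines:
q.pop; // return to the original environment
~a; // -> 5 (the earlier data is intact)
~apple; // -> nil (that data lives in the other environment)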
What does all this have to do with live coding? A ProxySpace is a special type of environment
that facilitates the use of NodeProxy objects. We start by pushing a new ProxySpace to the
top of the stack:
p = ProxySpace().push;
Instead of setting fade times on an individual basis, we can set a fade time for the ProxySpace
itself, which will be inherited by all proxies within the environment, although we retain the
ability to override the fade time of an individual NodeProxy:
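That assignment would be a single line, for example:
p.fadeTime = 3; // inherited by all proxies in this space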
Code Example 12.4 recreates the square wave/filter/LFO example using ProxySpace. Note
that all proxies in a ProxySpace can be simultaneously cleared by sending a clear message to
the environment itself.
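The proxy definitions of Code Example 12.4 do not appear in this excerpt; by analogy with the preceding NodeProxy version, they would look something like this (inside the ProxySpace, each environment variable is itself a NodeProxy):
~sig = {Pulse.ar(55 + [0, 0.3], 0.5, 0.1)};
~lfo = {LFTri.kr(0.2).exprange(300, 1200)};
~filt = {RLPF.ar(~sig.ar(2), ~lfo.kr(1), 0.1)};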
~sig.play;
~filt.play;
~sig.stop(3);
p.clear(5);
p.pop;
The beauty of ProxySpace is that it provides full access to the NodeProxy infrastructure
without us ever having to type “NodeProxy” or “Ndef”. The primary disadvantage of using
ProxySpace, however, is that every environment variable is an instance of NodeProxy, and
therefore they cannot be used to contain other types of data, like Buffers or GUI classes,
because these types of objects are not valid sources for a NodeProxy. For example, the
following line of code looks perfectly innocent, but fails spectacularly while inside of a
ProxySpace:
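For example, attempting to store a Buffer (an illustrative line, consistent with buffers used elsewhere in this book):
~buf = Buffer.read(s, Platform.resourceDir +/+ "sounds/a11wlk01.wav"); // error: not a valid NodeProxy source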
Luckily, the global interpreter variables (lowercase letters a through z) can still be used normally inside of a ProxySpace. There are only twenty-six of these, so using them on an individual basis is not sustainable! Instead, Code Example 12.5 demonstrates a better approach,
involving the creation of a multilevel collection-type structure, such as nested Events, all
contained within a single interpreter variable.
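A minimal sketch of the idea, using hypothetical keys:
b = (); // a single interpreter variable holds everything
b[\buf] = Buffer.read(s, Platform.resourceDir +/+ "sounds/a11wlk01.wav");
b[\data] = (root: 60, scale: [0, 2, 4, 7, 9]);
b[\data][\root]; // -> 60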
When finished using a ProxySpace, it’s always a good idea to pop it from the stack and return
to the previous environment:
p.pop;
TIP.RAND(); SIMILARITIES BETWEEN NDEF AND PROXYSPACE
The Ndef and ProxySpace styles are not actually that different from each other. When
using an Ndef-based approach, there is a singular instance of ProxySpace used by all
Ndefs, which resides in the background. This ProxySpace is accessible by calling
proxyspace on any Ndef. You can then chain additional methods that would ordinarily be sent to an instance of ProxySpace, like setting a global fadeTime for all
Ndefs. The following example demonstrates these similarities:
Ndef(\a).proxyspace.fadeTime_(5);
Ndef(\b).fadeTime; // -> 5
Whichever of these three styles (NodeProxy, Ndef, ProxySpace) you decide to use is ultimately a matter of individual preference. Speaking from personal experience, most users seem
to prefer Ndef, perhaps for its concise style and avoidance of environment variables, while
retaining the ability to freely use environment variables as needed. Keep in mind, however,
that these styles aren’t amenable to a “mix-and-match” approach. For a live coding session, it’s
necessary to pick one style and stick with it.
Ndef(\t).fadeTime_(2).play;
(
Ndef(\t, {
arg freq = 200, width = 0.5;
VarSaw.ar(freq + [0, 2], width: width, mul: 0.05);
});
)
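The set message implied by the comment below is missing from this excerpt; presumably:
Ndef(\t).set(\freq, 400);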
(
// after a source change, freq remains at 400, even though the default is 200
Ndef(\t, {
arg freq = 200, width = 0.5;
var sig = SinOsc.ar(freq + [0, 2], mul: 0.1);
sig = sig * LFPulse.kr(6, 0, 0.3).lag(0.03);
});
)
(
Ndef(\lfo, {LFTri.kr(0.25).exprange(300, 1500)});
Ndef(\t).xset(\freq, Ndef(\lfo));
)
(
// first, create a clock at 108 bpm and post beat information
t = TempoClock(108/60);
t.schedAbs(0, {t.beats.postln; 1;});
)
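The proxy’s clock/quant configuration is not shown here; presumably something like the following (the quant value of 4 is an assumption):
Ndef(\p).fadeTime_(0).clock_(t).quant_(4).play;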
(
// now, any source change to the proxy will be quantized:
Ndef(\p, { |freq = 1000|
var trig, sig;
trig = Impulse.kr(t.tempo);
sig = SinOsc.ar(freq) * 0.1 ! 2;
sig = sig * Env.perc(0, 0.1).kr(0, trig);
});
)
(
Ndef(\p).clear; // clean up
t.stop;
)
Note that using clock and quant does not require a fadeTime of 0, but is used here to create a
quantized source change that occurs precisely on a desired beat. If a source change occurs and
the fadeTime is greater than zero, the crossfade will begin on the next beat specified by quant.
When using ProxySpace instead of Ndef, the instance of ProxySpace can be assigned
clock and quant values, and all of the proxies within that space automatically inherit this information. At the same time, we retain the ability to override an individual proxy’s attributes
to be different from that of its environment. These techniques are demonstrated in Code
Example 12.8.
(
// create a clock, post beats, and push a new ProxySpace
t = TempoClock(108/60);
t.schedAbs(0, {t.beats.postln; 1;});
p = ProxySpace(clock: t).quant_(8).push; // all proxies inherit clock/quant
)
~sig.play;
(
// source changes are quantized to the nearest beat multiple of 8
~sig = {
var freq, sig;
freq = ([57, 60, 62, 64, 67, 69, 71]).scramble
.collect({ |n| n + [0, 0.1] }).flat.midicps;
sig = Splay.ar(SinOsc.ar(freq)) * 0.05;
};
)
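To make the next change immediate, the individual proxy’s quant would be overridden; this line is an assumption, absent from the excerpt:
~sig.quant = 0;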
(
// change now occurs immediately
~sig = {
var freq, sig;
freq = ([57, 60, 62, 64, 67, 69, 71] - 2).scramble
.collect({ |n| n + [0, 0.1] }).flat.midicps;
sig = Splay.ar(SinOsc.ar(freq)) * 0.05;
};
)
(
p.clear;
t.stop;
)
p.pop;
Ndef(\sines).play;
Ndef(\sines).numChannels; // -> 2
(
// Define a 2-channel source
Ndef(\sines, {
var sig = SinOsc.ar([425, 500]);
sig = sig * Decay2.ar(Impulse.ar([2, 3]), 0.005, 0.3, 0.1);
});
)
(
// Define a 4-channel source. No reshaping is done, and excess
// channels are mixed with the lowest two. A notification appears
// in the post window.
Ndef(\sines, {
var sig = SinOsc.ar([425, 500, 750, 850]);
sig = sig * Decay2.ar(Impulse.ar([2, 3, 4, 5]), 0.005, 0.3, 0.1);
});
)
Ndef(\sines).numChannels; // -> 2
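What changes between the two behaviors is the proxy’s reshaping mode, presumably set like this:
Ndef(\sines).reshaping = \elastic; // allow the proxy to reshape when the source changes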
(
// Defining a 4-channel source now reshapes the proxy. All four
// signals are on separate channels. If working with only two
// speakers, we'll only hear the first two channels.
Ndef(\sines, {
var sig = SinOsc.ar([425, 500, 750, 850]);
sig = sig * Decay2.ar(Impulse.ar([2, 3, 4, 5]), 0.005, 0.3, 0.1);
});
)
Ndef(\sines).numChannels; // -> 4
(
// An elastic proxy will shrink to accommodate a smaller source
Ndef(\sines, {
var sig = SinOsc.ar([925, 1100]);
sig = sig * Decay2.ar(Impulse.ar([6, 7]), 0.005, 0.3, 0.1);
});
)
Ndef(\sines).numChannels; // -> 2
Ndef(\sines).clear;
(
b = Buffer.read(s, Platform.resourceDir +/+ "sounds/a11wlk01.wav");
SynthDef(\play, {
arg atk = 0.002, rel = 0.08, buf = 0,
// remaining arguments and body were lost to a page break; a plausible reconstruction:
start = 0, rate = 1, amp = 0.5, out = 0;
var sig, env;
env = Env.linen(atk, 0.05, rel).kr(2);
sig = PlayBuf.ar(1, buf, rate * BufRateScale.kr(buf), startPos: start);
sig = sig * env * amp ! 2;
Out.ar(out, sig);
}).add;
t = TempoClock(108/60);
t.schedAbs(0, {t.beats.postln; 1;});
)
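The line that begins monitoring the pattern proxy is missing from this excerpt; presumably something along these lines, so that source changes align with the clock:
Ndef(\a).fadeTime_(2).clock_(t).quant_(4).play;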
(
// source change occurs on a quantized beat
Ndef(\a, Pbind(
\instrument, \play,
\dur, 1/2,
\buf, b,
\amp, 0.2,
\start, 36000,
));
)
(
// pattern-based proxies are crossfaded as expected
Ndef(\a, Pbind(
\instrument, \play,
\dur, 1/4,
\buf, b,
\amp, 0.3,
\start, Pwhite(50000, 70000),
\rate, -4.midiratio,
));
)
(
Ndef(\reverb, {
var sig;
sig = Ndef.ar(\a, 2);
sig = LPF.ar(GVerb.ar(sig.sum, 300, 5), 1500) * 0.2;
});
)
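Presumably, the reverb proxy is then monitored (this line is absent from the excerpt):
Ndef(\reverb).play;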
(
// clean up:
Ndef.all.do({ |n| n.clear });
t.stop;
)
12.5 TaskProxy
TaskProxy serves as a proxy for a routine (introduced in Section 5.2), allowing us to manipulate a timed sequence in real-time, even when that sequence isn’t fully defined.6 Though it’s
possible to use TaskProxy directly (similar to using NodeProxy), here we will only focus on its
subclass Tdef, which (like Ndef) inherits all the methods of its parent and avoids the need to
use environment variables. Note that a Tdef’s source should not be an instance of a routine,
but rather a function that would ordinarily appear inside of one.
Code Example 12.11 provides a basic demonstration. If a TaskProxy is in a “playing”
state when its source is defined, it will begin executing its source function immediately. If the
source stream is finite, the proxy will remain in a playing state when finished and will begin
executing again as soon as a new source is supplied—even if the new source is identical to the
previous one. Similarly, if a TaskProxy has reached its end, the play message will restart it.
Like routines, an infinite loop is a valid source for a TaskProxy (just be sure to include a wait
time!). When paused and resumed, a TaskProxy “remembers” its position and continues from
that point. When stopped, it will start over from the beginning when played again. Like other
proxies, clear will stop a TaskProxy and reset its source to nil.
Tdef(\t).play;
(
// a finite-length source — execution begins immediately
Tdef(\t, {
3.do{
[6, 7, 8, 9].scramble.postln;
0.5.wait;
};
"done.".postln
});
)
Tdef(\t).play; // do it again
(
// a new, infinite-length source
Tdef(\t, {
~count = Pseq((0..9), inf).asStream;
loop{
~count.next.postln;
0.25.wait;
};
});
)
Tdef(\t).pause;
Tdef(\t).stop;
Tdef(\t).clear;
Every TaskProxy runs on a clock and can be quantized, as shown in Code Example 12.12. If
unspecified, a Tdef uses the default TempoClock (60 beats per minute) with a quant value of
1. Usage is essentially identical to Ndef. Like \dur values in a Pbind, wait times in a Tdef are
interpreted as beat values with respect to its clock.
(
// create a verbose clock at 108 bpm
t = TempoClock(108/60);
t.schedAbs(0, {t.beats.postln; 1;});
)
(
// post a visual effect, execution begins on next quantized beat
Tdef(\ticks, {
loop{
4.do{ |n|
"*---".rotate(n).postln;
0.25.wait;
}
}
});
)
(
// clean up
Tdef(\ticks).clear;
t.stop;
)
Code Examples 12.11 and 12.12 aim to demonstrate basic usage, but mainly post values and
don’t really do anything useful. How might a Tdef be used in a practical setting? Whenever
you have some sequence of actions to be executed, Tdef is always a reasonable choice, especially
if you envision dynamically changing the sequence as it runs. One option is to automate the
sending of set messages to a NodeProxy. In Code Example 12.13, we have an Ndef that plays
a sustained chord. Instead of repeatedly setting new note values ourselves, we can delegate this
job to a Tdef.
Ndef(\pad).fadeTime_(3).play;
(
Ndef(\pad, { |notes = #[43, 50, 59, 66]|
var sig;
sig = notes.collect({ |n|
4.collect({
LFTri.ar(
freq: (n + LFNoise2.kr(0.2).bipolar(0.25)).midicps,
mul: 0.1
);
}).sum
});
sig = Splay.ar(sig.scramble, 0.5);
sig = LPF.ar(sig, notes[3].midicps * 2);
});
)
(
Tdef(\seq, {
var chords = Pseq([
[48, 55, 62, 64],
[41, 48, 57, 64],
[55, 59, 64, 69],
[43, 50, 59, 66],
], inf).asStream;
loop{
Ndef(\pad).xset(\notes, chords.next);
8.wait;
}
}).play
)
(
// clean up
Tdef(\seq).stop;
Ndef(\pad).clear(8);
)
TIP.RAND(); ARRAY ARGUMENTS IN A UGEN FUNCTION
It is possible for an argument declared in a UGen function (e.g., in a SynthDef or
NodeProxy function) to be an array of numbers, as is the case in Code Example 12.13.
However, once an array argument is declared, its size must remain constant.
Additionally, the argument’s default value must be declared as a literal array, which
involves preceding the array with a hash symbol (#). Literal arrays were briefly
mentioned in Section 1.6.9, but not discussed in detail.
real-time audio processing might strain your computer. Plus, we’d still need to copy the code
manually when watching the video later.
The History class, demonstrated in Code Example 12.14, provides a lightweight solution
for recording code as it is evaluated over time. When a history session is started, it captures
every subsequent piece of code that runs, exactly when it runs, until the user explicitly ends
the history session.
Once a history session has ended, we have the option of recording the code history to a text
file, so that it can be studied, sent to a collaborator, or replayed. The saveCS method (short for “save as compile string”) writes a file to a path as a List of individual code evaluations,
formatted as strings, which can be loaded in as the current history using History.loadCS,
and played back automatically with History.play. This will feel as if a “ghost” version of
yourself from the past is typing at your keyboard! If desired, a history playback can be stopped
partway through with History.stop. Alternatively, a history can be saved with saveStory,
which produces a more human-readable version, complete with commented timestamps. This
type of save is meant to be played back manually and is well-suited for studying or editing.
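As reproduced here, Code Example 12.14 is missing its opening; a history session would begin with:
History.start; // begin capturing all evaluated code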
s.boot;
Ndef(\k).fadeTime_(1).play;
(
Ndef(\k, {
var sig, mod;
mod = LFSaw.kr(0.3, 1).range(2, 40);
sig = LFTri.ar([250, 253], mul: 0.1);
sig = sig * LFTri.kr(mod).unipolar(1);
});
)
Ndef(\k).clear(3);
s.quit;
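Before replaying, the session must be ended, and may optionally be saved; the following lines are assumed, with a hypothetical file path:
History.end; // stop capturing
History.saveCS("~/Desktop/history.scd".standardizePath); // optional: write the session to a file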
History.play; // replay
Notes
1 https://toplap.org/about/.
2 https://sonic-pi.net/tutorial.html.
3 https://doc.sccode.org/Overviews/JITLib.html.
4 https://doc.sccode.org/Tutorials/JITLib/jitlib_basic_concepts_01.html.
5 Technically, when calling xset on a NodeProxy, this method does not directly “crossfade” between
two argument values (which would create a glissando or something similar); rather, an xset message
instantiates a new source Synth using the updated argument value and crossfades between it and the
older Synth. This behavior can be confirmed by visually monitoring the Node tree.
6 TaskProxy is a reference to the Task class, a close relative of routines. Don’t be distracted by the fact that this class is not named “RoutineProxy”—routines and tasks are very similar, and both are subclasses of Stream. Though tasks and routines are neither identical nor fully interchangeable, it is rare during practical usage to encounter an objective that can be handled by one but not the other.
INDEX
For the benefit of digital users, indexed terms that span two pages (e.g., 52–53) may, on occasion, appear on only
one of those pages.
Page numbers followed by f refer to figure; t to table; c to code example; tr to tip.rand().
absolute file paths. See resolving relative file paths
actions of GUI elements, 238–39
addActions for Nodes, 195, 196
ADSR envelope, 52
aliasing, 37–38, 92–93
all-pass filter. See delay effect (with feedback)
amplitude, 40, 41, 44tr, 52, 71, 81, 104, 162–63
animation of GUI elements, 247–48
AppClock, 243–44
arguments
  for functions, 24–25
  for methods, 8
  for NodeProxies, 308–9
  for UGens, 40–42
  for UGen functions, 43–45
  in a SynthDef, 68–69, 123
  trigger-type, 59c, 59, 126
Array, 22–24. See also dup
  as storage mechanism for Buffers, 119
  as storage mechanism for musical events, 266–69
array arguments in a SynthDef or NodeProxy, 317, 317tr
audio setup, 35, 41, 41tr, 185–86
binary operators, 18, 18t
bitcrushing, 112–13
Blip, 82
block size. See control block
Boolean values, 21, 123
bouncing to disk. See recording to an audio file
brown noise, 99
bufnum, 117
BufRd, 127–28
BufWr, 134–38
busses, 183–88
  resetting allocation counter, 188
classes, 6
client, the. See language, the
clipping, 109
  smooth variants, 110–11
CmdPeriod, 218
color, 237
comb filter. See delay effect (with feedback)
command-period (keyboard shortcut), 42
comments, 12–13
control block, 38, 193
ControlSpec, 239–41
conversions
  decibels to amplitude, 71
  MIDI note numbers to Hertz, 71
  semitones to frequency ratios, 71
debugging. See polling a UGen; see postln; see trace
  MIDI data, 212
default SynthDef, the, 162
defer. See AppClock
delay effect, 133, 137, 199–200
  with feedback, 203–4
Dictionary
  as storage mechanism for musical events, 269–70
digital audio, 37–38
doneActions, 54, 124, 279
dup, 23, 60
DynKlang, 83
DynKlank, 108
echo. See delay effect
effects
  inserts vs. sends, 183
enclosures, 7, 9
Env, 55–59, 142. See also Pseg
EnvGen, 55–59
Environment, 304–5
escape character, 20c, 20
evaluating code
  multi-line block, 9
  single line, 7
Event
  as module in a state-based composition, 281
  as playable action, 160–66
  as storage mechanism, 120, 160
  midi type, 219–20
  note type, 161–66
  rest type, 170–73
EventStreamPlayer, 167, 285–88
exclamation mark. See dup
fadeTime, 46, 301. See also release
feedback
  acoustic, 190, 190–91tr
  delay lines, 203