FAQ

How is each Audioglyph unique?

On-chain data seeds a pseudorandom number generator, which is used to set the parameters of nodes in a signal processing graph.

The structure of this graph and the probability distributions of the parameters have been carefully designed to balance generating a wide range of sounds with ensuring that every Audioglyph is musical and interesting.
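To make that concrete, here is a minimal sketch of the idea - the PRNG choice (mulberry32), parameter names, and ranges below are placeholders rather than the actual Audioglyphs code:

```js
// Sketch: a deterministic PRNG seeded from on-chain data, drawing node
// parameters from hand-tuned ranges. Names and ranges are placeholders.
function mulberry32(seed) {
  return function () {
    seed = (seed + 0x6d2b79f5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

const rand = mulberry32(0x1234abcd); // seed would be derived from on-chain data

const params = {
  clockHz: 0.5 + rand() * 3.5,                 // uniform between 0.5 and 4
  filterCutoff: 200 * Math.pow(2, rand() * 5), // log-uniform, 200 Hz to 6.4 kHz
  reverbMix: rand() * rand(),                  // skewed toward dry
};
```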

What does it mean for Audioglyphs to be infinite?

While there is a small chance that some Audioglyphs will repeat, the melodic sequences in most Audioglyphs are based on irrational numbers and will continue forever without repeating.

How do Audioglyphs run in the browser?

Audioglyphs run in the browser using the Web Audio API, which is now fully supported across common browsers. Their processing graph makes use of both native nodes implemented by the browser and custom nodes written in JavaScript and in C++ compiled to WebAssembly.
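As a rough sketch of how native and custom nodes coexist in one graph (the worklet file name and processor name below are hypothetical):

```js
const ctx = new AudioContext();

// A native node implemented by the browser.
const filter = new BiquadFilterNode(ctx, { type: "lowpass", frequency: 800 });

// A custom node: its processor (JavaScript, or WASM called from JavaScript)
// is registered inside the worklet module.
await ctx.audioWorklet.addModule("glyph-voice-processor.js");
const voice = new AudioWorkletNode(ctx, "glyph-voice");

voice.connect(filter).connect(ctx.destination);
```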

The audio generated locally is entirely uncompressed, so playback quality is lossless. The code required to play back all 10,000 glyphs is only a few hundred kilobytes - around the same size as two seconds of lossless streaming on Apple Music.

Running in the browser is important for making local generation practical - it provides cross-platform audio libraries, and the browser sandbox resolves the security issues that would come with running untrusted native code to generate audio.

How does the sound generation work?

Audio is produced by a DSP graph with all modulation and sequencing accomplished using continuous audio signals. This is an approach common in analog modular synthesizers but different from most music creation software.
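In Web Audio terms, that means connecting oscillators directly to AudioParams rather than scheduling discrete parameter changes. A minimal example, with illustrative values:

```js
const ctx = new AudioContext();

const lfo = new OscillatorNode(ctx, { frequency: 0.25 }); // slow sine
const depth = new GainNode(ctx, { gain: 400 });           // sweep of +/- 400 Hz
const filter = new BiquadFilterNode(ctx, { type: "lowpass", frequency: 800 });

lfo.connect(depth);
depth.connect(filter.frequency); // the modulation itself is an audio signal
lfo.start();
```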

Melodies are derived from a moiré pattern between two unrelated frequencies. This approach allows complex patterns to emerge from a few simple inputs. A clock oscillator controls the sampling frequency - each cycle of this oscillator triggers a value to be sampled from a second oscillator running at an incommensurable frequency. The sampled value is then quantized to a value in a randomized scale. When the quantized value changes from the previous step, an envelope is triggered to modulate the amplitude and filter frequency of the main voice.
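A simplified, non-realtime sketch of that sequencing logic (in the real graph it runs as continuous audio signals, and the clock rate, modulator frequency, and scale below are placeholders):

```js
const clockHz = 2;               // steps per second
const modHz = Math.PI;           // incommensurable with the clock frequency
const scale = [0, 3, 5, 7, 10];  // a randomized scale, in semitones

let previous = null;
for (let step = 0; step < 16; step++) {
  // Each clock cycle samples the second oscillator...
  const t = step / clockHz;
  const sampled = Math.sin(2 * Math.PI * modHz * t); // -1..1
  // ...the sample is quantized to a scale degree...
  const index = Math.min(Math.floor(((sampled + 1) / 2) * scale.length), scale.length - 1);
  const note = scale[index];
  // ...and an envelope is triggered only when the quantized value changes.
  if (note !== previous) {
    console.log(`step ${step}: note ${note} -> trigger envelope`);
  }
  previous = note;
}
```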

Drum sounds are based on PCM samples of a Roland TR-606 drum machine recorded to tape before being re-recorded digitally. These samples are randomly pitched and high-pass filtered to produce variation between glyphs. The rhythms for the three drum voices are randomly generated, but with probabilities skewed towards common patterns in popular music. Many glyphs incorporate polymeter to create more complex rhythms from a few simple phrases.
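A rough sketch of that rhythm generation (the step weights and pattern lengths are placeholders, not the actual values):

```js
// weights[i] is the chance of a hit on step i, skewed toward strong beats
// so that common patterns are the most likely outcomes.
function makePattern(length, weights) {
  return Array.from({ length }, (_, i) => Math.random() < weights[i % weights.length]);
}

const kick = makePattern(16, [0.9, 0.05, 0.3, 0.1]); // 16-step phrase
const hats = makePattern(12, [0.8, 0.4, 0.6]);       // 12-step phrase

// The 16- and 12-step phrases only realign every 48 steps, so two short
// patterns produce a longer, more complex composite rhythm.
for (let step = 0; step < 48; step++) {
  const hits = [kick[step % 16] && "kick", hats[step % 12] && "hat"].filter(Boolean);
  console.log(step, hits.join(" "));
}
```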

How do the visualizations work?

Audioglyph visualizations are based on the same moiré pattern used to generate melodies. Two sets of overlapping concentric circles move against each other, creating complex interference patterns.

Properties like the speed and extent of the motion and the number of circles are based on parameters of the audio. Additional animations are triggered based on events passed from the audio thread.
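A minimal sketch of the drawing loop (circle counts, spacing, and motion values below are placeholders; in Audioglyphs they are driven by the audio parameters):

```js
const canvas = document.querySelector("canvas");
const g = canvas.getContext("2d");

// One set of concentric circles centered at (cx, cy).
function drawRings(cx, cy, count, spacing) {
  for (let i = 1; i <= count; i++) {
    g.beginPath();
    g.arc(cx, cy, i * spacing, 0, 2 * Math.PI);
    g.stroke();
  }
}

function frame(time) {
  g.clearRect(0, 0, canvas.width, canvas.height);
  const offset = 20 * Math.sin(time / 1000); // speed and extent of motion
  drawRings(canvas.width / 2 - offset, canvas.height / 2, 40, 6);
  drawRings(canvas.width / 2 + offset, canvas.height / 2, 40, 6);
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);
```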

Colors used in the visualization are generated using the HSLuv color space.

What do you mean by future of music?

"Future of music" is a little hyperbolic, but we are experimenting with some things in Audioglyphs that we think are first steps in exciting directions.

One of these is local synthesis of generative music - it is what makes a series this large, with infinite duration, small file sizes, and high audio quality, possible. Audioglyphs' variation is based on randomization with seed data from the blockchain, but one of the exciting things about local synthesis is that it can also allow other kinds of customization based on things like heart rate, weather, or surroundings.

We think the most important next step for local synthesis is the adoption of a common standard for publishing and playing music that allows diverse inputs while preserving privacy. Web Audio is an ideal foundation for this standard because of its multiple implementations and the security features built into the web platform.

Composing generative music for local synthesis involves a very different artistic process from conventional songwriting. It requires you to think about the space of all possible songs you would like to create and to sketch out its boundaries. For Audioglyphs we did this in code, but in the future this will require new tools designed for more creative workflows.

What do you plan to open source?

As part of the development of Audioglyphs, we created a library for managing Web Audio graphs in a functional style, with straightforward and efficient updates, and for handling common setup tasks like loading audio files and worklet processors.

We are currently working on decoupling this from Audioglyph-specific code in order to open source it. We hope that it will be useful to others working on similar projects.
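As a hypothetical illustration of what a functional-style graph description can look like (this is not the library's actual API):

```js
// The graph is described as plain data, and an update is expressed as a
// new description that gets diffed against the previous one.
const graph = {
  osc:    ["oscillator",  { type: "sawtooth", frequency: 220 }, []],
  filter: ["biquad",      { type: "lowpass", frequency: 800 }, ["osc"]],
  out:    ["destination", {}, ["filter"]],
};

// One changed parameter; only filter.frequency would need to be updated
// in the underlying Web Audio nodes.
const next = {
  ...graph,
  filter: ["biquad", { type: "lowpass", frequency: 1200 }, ["osc"]],
};
```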

What open source libraries do Audioglyphs use?

Audioglyphs' reverb is based on open source code from Mutable Instruments Eurorack modules. These modules are a joy to use and an endless source of inspiration, both for creating music and for writing software like Audioglyphs.

Audioglyphs also uses Web3 for interaction with the Ethereum blockchain, React and Styled Components for UI, and HSLuv to generate its color scheme.

How are Audioglyphs carbon neutral?

We purchased a carbon offset through Offsetra covering emissions from the minting of Audioglyphs.

How many did the team mint?

One hundred Audioglyphs were reserved for the team.
