ALSA

Virtual Midi – bridging rtpmidi to Reaper and Bitwig: A Case Study for the Behringer X-Touch.


The Linux ALSA MIDI subsystem has a lot of capabilities and it takes a while to learn all of the bits and pieces that are available. Such was the case when one of our customers wanted to connect Reaper to an external RTP-MIDI capable device (a Behringer X-Touch). The solution was the snd-virmidi kernel module. This post will describe what snd-virmidi does and how it can be used to bridge different types of software.
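
As a quick preview of where this is going, the module’s effect can be seen in just a few lines of C. The sketch below is hypothetical: it assumes snd-virmidi is already loaded (modprobe snd-virmidi) and that the virtual card appears as card 1 — check amidi -l for the real numbering on your system. Bytes written to the virtual RAW device reappear on a matching ALSA sequencer port, where sequencer-only software (or an rtpmidi bridge) can pick them up.

    /* Hypothetical sketch: write a note-on to a snd-virmidi RAW device.
       "hw:1,0" is an assumption -- substitute the card/device shown by `amidi -l`.
       The same bytes then appear on the corresponding sequencer port. */
    #include <alsa/asoundlib.h>

    int main(void) {
        snd_rawmidi_t *out = NULL;
        unsigned char note_on[3] = { 0x90, 60, 100 };  /* middle C, velocity 100 */

        if (snd_rawmidi_open(NULL, &out, "hw:1,0", 0) < 0)
            return 1;

        snd_rawmidi_write(out, note_on, sizeof(note_on));
        snd_rawmidi_drain(out);
        snd_rawmidi_close(out);
        return 0;
    }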

The types of Linux device access

MIDI hardware devices connected to Linux show up simultaneously through two distinct mechanisms:

  • the RAW MIDI interface
  • the SEQ (sequencer) MIDI interface

The figure below illustrates this situation. On the left is the “RAW” MIDI software interface and on the right is the “SEQ” MIDI interface. When an external device is connected, these two distinct interfaces are created for it.
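
One way to observe the SEQ side of this picture is to ask the sequencer for its list of clients: a connected hardware device shows up as a client with its own ports, alongside any software clients. A minimal sketch (the client names printed will, of course, depend on what is plugged in):

    /* Minimal sketch: list ALSA sequencer clients.  A hardware MIDI device
       appears here as a client, in addition to its RAW device node. */
    #include <alsa/asoundlib.h>
    #include <stdio.h>

    int main(void) {
        snd_seq_t *seq = NULL;
        snd_seq_client_info_t *cinfo;

        if (snd_seq_open(&seq, "default", SND_SEQ_OPEN_DUPLEX, 0) < 0)
            return 1;

        snd_seq_client_info_alloca(&cinfo);
        snd_seq_client_info_set_client(cinfo, -1);   /* start before the first client */
        while (snd_seq_query_next_client(seq, cinfo) >= 0) {
            printf("client %3d: %s\n",
                   snd_seq_client_info_get_client(cinfo),
                   snd_seq_client_info_get_name(cinfo));
        }

        snd_seq_close(seq);
        return 0;
    }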

The RAW interface

The RAW interface presents MIDI data to a program as an uninterpreted byte stream. A ‘noteOn’ message appears as a sequence of bytes (0x90, 0x??, 0x??) that the program must interpret according to the MIDI standard. The stream of bytes on this interface carries no timestamps and is delivered without any interpretation.
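
A minimal sketch of reading that byte stream with the ALSA rawmidi API is shown below. It assumes the device name "hw:1,0,0" (check amidi -l for the real one) and, for brevity, that every message is a three-byte channel message — something a real parser would not assume.

    /* Sketch: read uninterpreted bytes from a RAW MIDI device and recognize
       note-on messages ourselves.  No timestamps, no parsing is done for us. */
    #include <alsa/asoundlib.h>
    #include <stdio.h>

    int main(void) {
        snd_rawmidi_t *in = NULL;
        unsigned char buf[3];

        if (snd_rawmidi_open(&in, NULL, "hw:1,0,0", 0) < 0) {
            fprintf(stderr, "could not open RAW MIDI device\n");
            return 1;
        }

        while (snd_rawmidi_read(in, buf, sizeof(buf)) == sizeof(buf)) {
            if ((buf[0] & 0xF0) == 0x90)   /* 0x9n = note-on, channel n */
                printf("noteOn ch=%d note=%d vel=%d\n",
                       buf[0] & 0x0F, buf[1], buf[2]);
        }

        snd_rawmidi_close(in);
        return 0;
    }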


Announcing McLaren Synth Kit

The “McLaren Synth Kit” is an Objective-C framework for using MIDI and Audio on Linux computers with the GNUstep programming environment. It is distributed as a project including libraries, headers and example programs at https://github.com/mclarenlabs/libs-mclaren-alpha. You can use it to experiment with sound synthesis for your own personal projects. The project is designed to provide ready-to-compile examples after you clone the repo.

Working with sound is a delicate endeavor. The Synth Kit does a lot of the low-level work of opening devices, managing an audio thread and copying MIDI events to and from dispatch queues. This leaves the audio programmer free to think about designing sounds as a graph of processing units, called Voices.

Standard Voices in the Synth Kit provide envelopes and oscillators of various types, filters and a reverb algorithm. Using the features of modern Objective-C (blocks, ARC and dispatch queues) the Synth Kit makes programming sounds easy, or at least “easier.”


VSCode, Ubuntu Snaps and ALSA Sound Development

Here at McLaren Labs we like to try all sorts of tools and development environments to see how they work together. We’ve been using vscode (https://code.visualstudio.com/) on and off for about a year, but only recently decided to try using it for a more complete edit/debug/run cycle for an Objective-C Synthesizer project. We ran into an unexpected interaction between the Snap environment of vscode and the ALSA (Advanced Linux Sound Architecture) PCM interface.

The Symptoms

What we discovered was that a program attempting to access an ALSA sound device malfunctioned when run from within vscode. It worked correctly in a “normal” terminal, however.

We reproduced the behavior using the aplay command from alsa-utils. See the screenshot below for what should appear in a terminal when it runs, playing a standard Ubuntu sound called “Front Left”. Unless otherwise specified, aplay opens the “default” sound device.

If your Ubuntu sound system is set up correctly, you will hear a woman saying “Front Left” … and it will come out of your front-left speaker.
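
The same check can be made in a few lines of C, which is handy when debugging from inside vscode itself. This minimal sketch simply opens the “default” PCM device (the one aplay uses when no -D option is given) and reports whether that succeeded; run it from a normal terminal and from the vscode environment to compare.

    /* Minimal sketch: open ALSA's "default" PCM device and report the result. */
    #include <alsa/asoundlib.h>
    #include <stdio.h>

    int main(void) {
        snd_pcm_t *pcm = NULL;
        int err = snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0);

        if (err < 0) {
            fprintf(stderr, "snd_pcm_open(\"default\") failed: %s\n", snd_strerror(err));
            return 1;
        }

        printf("opened the default PCM device\n");
        snd_pcm_close(pcm);
        return 0;
    }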


Why McLaren Labs uses Objective-C

McLaren Labs was started with the idea that music and media creation on Linux should be as easy and fluid as on Mac OS X. We had been inspired by AVFoundation and the modular way its pieces fit together. We loved being able to build media pipelines with sources and sinks that cleaned up after themselves when you were done with them.

Many of the facets of the OS X components we liked were provided by ObjC features enabled by the Clang compiler and LLVM tool suite. LLVM has revolutionized language development by paving the way for Swift and Rust. Back when we were getting started, Swift on Linux was gaining traction and we considered adopting it. However, after some initial explorations with Swift and libdispatch, we discovered that libdispatch just wasn’t ready for use with Swift on Linux. That was in 2015; Swift on Linux is much more mature now. The equation might be different today … but it might not be.


Korg Microkey Air 37 Bluetooth MIDI Keyboard with Raspberry Pi

Do you want to have even more fun with your musical Raspberry Pi? Use an external Bluetooth MIDI keyboard with it! For this project, you need to download and compile a new version of the Bluetooth drivers for Linux. If you don’t already have compilation tools installed, you’ll need those too.

We will tell you how to compile and install the necessary Bluetooth driver, and then describe how to pair a Bluetooth MIDI Keyboard.

Prerequisites

You should have a Raspberry Pi 3 with built-in Bluetooth adaptor and Raspbian Stretch OS.

Punching it Up: Low-latency notes

Sometimes you want a really “punchy” sound. To a musician, this means a sound with a rapid attack and a quick reaction from the keyboard. To a software developer, this means a sound with a very short attack time and very low latency from the keyboard, through the synthesizer, to the audio output. To make a punchy sound, we’re going to use an external USB audio card, and also adjust the sound card settings.

What is Latency?

Latency is the delay from when you hit a note on the keyboard until you hear the sound. Musicians deal with latency all the time, because there are audio delays inherent in all of our equipment. Pipe organ players have long been accustomed to a delay between the keyboard action and the sounding of a pipe, and organists learn to adapt.

If the latency between the keypress and the sound is constant, a musician has a good chance of being able to compensate. If the latency is unpredictable, even by a tiny bit, then a musician will have a harder time keeping their music sounding rhythmic. We are going to try to adjust our organ to reduce latency, and also the variance of the latency.
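
Roughly speaking, the latency contributed by the sound card itself is the ALSA buffer size divided by the sample rate: a 256-frame buffer at 48 kHz adds about 5.3 ms, while a 128-frame buffer adds about 2.7 ms. Below is a hedged sketch of asking ALSA for a small period and buffer; the device name "hw:1,0" and the specific sizes are assumptions, and the *_near calls mean the hardware may grant something close to, rather than exactly, what was requested.

    /* Sketch: request a small period and buffer from ALSA to reduce latency.
       Assumes a hardware PCM device at "hw:1,0", 48 kHz, stereo, 16-bit. */
    #include <alsa/asoundlib.h>
    #include <stdio.h>

    int main(void) {
        snd_pcm_t *pcm = NULL;
        snd_pcm_hw_params_t *params;
        unsigned int rate = 48000;
        snd_pcm_uframes_t period = 64;    /* 64 frames @ 48 kHz ~ 1.3 ms */
        snd_pcm_uframes_t buffer = 128;   /* two periods ~ 2.7 ms total  */

        if (snd_pcm_open(&pcm, "hw:1,0", SND_PCM_STREAM_PLAYBACK, 0) < 0)
            return 1;

        snd_pcm_hw_params_alloca(&params);
        snd_pcm_hw_params_any(pcm, params);
        snd_pcm_hw_params_set_access(pcm, params, SND_PCM_ACCESS_RW_INTERLEAVED);
        snd_pcm_hw_params_set_format(pcm, params, SND_PCM_FORMAT_S16_LE);
        snd_pcm_hw_params_set_channels(pcm, params, 2);
        snd_pcm_hw_params_set_rate_near(pcm, params, &rate, 0);
        snd_pcm_hw_params_set_period_size_near(pcm, params, &period, 0);
        snd_pcm_hw_params_set_buffer_size_near(pcm, params, &buffer);
        snd_pcm_hw_params(pcm, params);

        printf("granted: period=%lu frames, buffer=%lu frames (~%.1f ms at %u Hz)\n",
               (unsigned long)period, (unsigned long)buffer,
               1000.0 * (double)buffer / rate, rate);

        snd_pcm_close(pcm);
        return 0;
    }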

Why not the internal sound device?

The internal sound chip of the Raspberry Pi 3 is good enough for desktop sounds and casual listening to music, but if you want clearer sound and lower latency, you will want an external USB sound card. The actual experience you have will vary with the sound card you choose. Here at McLaren Labs we use a Yamaha MG-10XU mixer with USB input as an external sound device, and it works great.
Read about how we reduced latency and created a “punchy” sound below the break.

Using Yoshimi Software Synthesizer on the Raspberry Pi 3B

The Raspberry Pi 3B is an amazingly powerful computer for the price. It provides quad-core computing power for just $35. This makes it more than capable for music experimentation and learning, and experimenting with software synthesizers is a fun way to learn about sound.

Yoshimi Software Synthesizer running on the Raspberry Pi 3B


This article gives some hints for setting up a software synthesizer on your Raspberry Pi. We’ll talk about installing and configuring Yoshimi. Of course, this “software synth” works great with McLaren Labs rtpmidi, so you can experiment with a network of Raspberry Pi synthesizers too.