So when are we going to get to kick the latency elephant out of the living room?

Latencies of hardware digital mixers in practical use sit at 1-2ms, but those mixers are very limited in what they can do. DAWs are more like 5ms+ (often quite a bit higher) for similar functionality using plugins, very dependent on interface drivers. I see that some companies have for a while been using methods of isolating CPU cores, so that OS housekeeping such as IRQs is kept away from realtime audio work, to achieve embedded-like latency on desktop/server CPUs with no dedicated DSP required. So why isn't this part of desktop OSes by now? It would be great to use a racked desktop and a couple of ADAT (or whatever protocol) interfaces as a flexible digital mixer and recorder for live band use. But the mainstream seems uninterested in making it happen. Just a couple of niche companies out there are doing it, and at high cost.

Soft real-time operating systems are a thing, but they can't absolutely guarantee that every deadline will be met.

https://news.ycombinator.com/item?id=27269589

From what I gathered in previous reading, Elk runs on select mobile hardware (mobile Intel and ARM, I believe). Merging Technologies MassCore, for example, runs on regular Intel hardware where everyone's DAWs, plugins, and interfaces already live.

I guess the real question is exactly how much can be achieved with the right operating system on commodity hardware, and at what point dedicated hardware becomes a necessary part of the solution.

From what I have read so far, dedicated hardware is not really required. It's a matter of isolating cores, and therefore isolating audio processes from everything else, along with some tuning to reduce the latencies involved in scheduling, IRQs, etc., removing the need for the large audio buffer that a stock OS configuration demands. Assuming that none of the plugins used require their own buffering, the latency would then be the audio conversions + any interfacing latencies (ADAT, for example) + whatever buffer some plugins may still require, depending on plugin selection.
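As a rough back-of-envelope of that budget (the converter and ADAT figures below are just assumed round numbers for illustration, not measurements of any particular hardware; the 64-sample block stands in for whatever residual plugin buffering remains, and can be set to 0 for the conversions-plus-transport-only case described above):

```python
# Rough round-trip latency budget. Converter and ADAT figures are assumed
# round numbers, and BLOCK stands in for any residual plugin/processing buffer.
SAMPLE_RATE = 48_000   # Hz
BLOCK = 64             # residual buffer in samples; 0 = conversions + transport only
AD_MS = 0.5            # assumed A/D converter latency
DA_MS = 0.5            # assumed D/A converter latency
ADAT_MS = 0.2          # assumed ADAT transport latency per hop

block_ms = BLOCK / SAMPLE_RATE * 1000
rtl_ms = AD_MS + ADAT_MS + 2 * block_ms + ADAT_MS + DA_MS   # buffered in and out
print(f"buffer contributes {2 * block_ms:.2f} ms, estimated RTL = {rtl_ms:.2f} ms")
# buffer contributes 2.67 ms, estimated RTL = 4.07 ms
```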

One approach I have read about is running dual kernels, where one kernel is dedicated to audio processes on isolated cores. This was a Linux config, and the reported latency was just that of the audio conversions. If I'm not mistaken, MassCore also runs a small dedicated kernel for its audio processes.

You can achieve a fair bit that way, but you can’t guarantee that some device/driver isn’t going to screw it all up for you.

Latency under 5ms is achievable right now, with or without a real-time OS. A real-time OS might make glitches less likely, but you can't absolutely guarantee that a glitch will never happen.

Exactly how low you can go with a real-time OS, reliably enough that you'd call it as good as a dedicated hardware solution, is a good question.

I should note, I had a system with an M-Audio Delta 1010 that could achieve 2ms round trip with a 64-sample buffer at 96kHz.

Not too shabby. But of course, that uses a bunch of CPU power.

That is the point of running audio processes on dedicated cores / a dedicated kernel: no other devices are running there. No terminals, graphics, keyboard, mouse, network, etc. So it amounts to using a couple of dedicated cores to run an audio I/O and processing server.

MOTU 828ES tested at 1.6ms RTL @ 96kHz, 16/16
(buffer size 16 samples with a 16-sample safety buffer)
via a Thunderbolt connection

Whether your system can handle 16 samples without dropping out is the kicker

I think the RME Fireface UFX+ has similarly low latency via TB

Dunno about the Apollo stuff

So theoretically they're at sub-2ms, but it's machine-dependent, and probably not for most

Yea, that is the kicker. So take feature parity with a small-format hardware digital mixer, for example: 48kHz; 16 channels in, 8 channels out; parametric EQ, compression, and gate on every input channel; parametric EQ and compression on every output channel; 8-16 mono (or half that for stereo) additional effects slots to be used as inserts or sends wherever you like; full routing freedom of all inputs to all outputs.
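Just to put a rough number on that spec (the 64-sample block is an assumption standing in for a "lowest latency" setting; the real per-plugin cost is anyone's guess):

```python
# Rough tally of processor instances for the mixer spec above; counts only,
# the actual CPU cost of each EQ/compressor/gate is entirely plugin-dependent.
inputs, outputs, fx_slots = 16, 8, 16
per_input = 3     # parametric EQ + compressor + gate
per_output = 2    # parametric EQ + compressor
instances = inputs * per_input + outputs * per_output + fx_slots

block, rate = 64, 48_000          # assumed "lowest latency" block size at 48kHz
budget_ms = block / rate * 1000
print(f"{instances} processor instances, {budget_ms:.2f} ms to finish each block")
# 80 processor instances, 1.33 ms to finish each block
```

Every one of those instances has to finish inside that window, every single block, or the whole engine glitches.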

I'm pretty certain that would cause issues for a garden-variety desktop DAW at the lowest latency settings. My desktop with an RME card interface couldn't do it.

The PreSonus Quantum Thunderbolt interface manages 1.9ms round trip with a buffer size of 32.

On Linux you can assign processes to cpusets, so a DAW with its GUI in a separate process could have all of the audio processing done exclusively on dedicated cores. You could do the same for the audio interface's processes, and use an RT kernel to give priority to those and to the disks.
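A minimal sketch of the affinity/priority part (Linux only; the core numbers and priority are made-up examples, the cores would still need isolating from general scheduling and IRQs via isolcpus or cgroup cpusets, and SCHED_FIFO needs appropriate rtprio limits):

```python
import os

# Hypothetical cores reserved for the audio engine, and a hypothetical priority.
AUDIO_CORES = {2, 3}
RT_PRIORITY = 70

# Pin the calling process (pid 0 = self) to the reserved cores...
os.sched_setaffinity(0, AUDIO_CORES)
# ...and request FIFO real-time scheduling at the chosen priority.
os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(RT_PRIORITY))

print("running on cores:", os.sched_getaffinity(0))
```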

I dug out @pipelineaudio's thread to see what my old RTL measurements were: here

I have to ask: can anybody really tell the difference between 4ms actual RTL and 1ms or so? Have there been blind tests with the pickiest drummers in the world?

What matters is performance under a real-world load. My RME card can run at the lowest buffer too, but a load like the one described above, for feature parity with a digital mixer, sends it stuttering and dying for sure.

For drumming I think it would be noticeable, but I don't know how off-putting it would be. When I had an electronic kit set up, the ~3ms difference between 5ms and 8ms was make or break. For singing it would definitely be noticeable, and possibly the same. Some singers have issues with even 1ms in headphones and have to use an all-analog path for monitoring. There may be more at play there than just the latency alone, though, such as hearing comb filtering between the sound in their head and the sound in the headphones. Throw on a set of headphones and sing a bit and see what you think. Anyway, lower latency is always best if you can get it; you can always add latency if you want it for some reason. Keep in mind that if you are monitoring through speakers, every 1ms of latency is like moving the speakers about a foot further from your ears, plus any comb filtering or delay from hearing both the source and the latent sound together.

This is one person’s subjective opinion:

And by the way, I did some latency testing years ago and talked about it on the Cockos forum, playing MIDI pads against a metronome at progressively larger buffer sizes and recording the results. As the buffer went up my timing got worse, which could be seen in the peaks of the recordings, and it felt increasingly fatiguing to stay with the metronome.

Going on what I have read, I think what is happening with these proprietary CPU core isolation solutions is running audio I/O and processing on a separate stripped-down, latency-tuned Linux, with the bigger OS handling everything else on virtualization hardware, essentially using the bigger OS as a remote GUI where extra latency isn't a big deal. It seems that none of these companies get into too many details about what they are doing, though.

That SOS article is pretty wrong about latency. He says that manufacturers sometimes get it wrong when reporting milliseconds, but then goes on to describe how he works it out:

If your audio application or soundcard provides a readout of buffer size in samples instead, it’s easy to convert this to a time delay by dividing the figure provided by the number of samples per second. For instance, in the case of a typical ASIO buffer size of 256 samples in a 44.1kHz project, latency is 256/44100, or 5.8ms, normally rounded to 6ms. Similarly, a 256-sample buffer in a 96kHz project would provide 256/96000, or 2.6ms latency.

He needs to double everything, at least!

His calculation is right, but he is looking at it in Windows ASIO terms of a total buffer setting, not Linux ALSA terms of blocksize and periods. For example, in Windows an ASIO driver control panel might have a buffer setting of 256 samples, which should be the equivalent of ALSA settings of a 128-sample blocksize and 2 periods. Total buffer samples / sample rate does give the total reported latency, or in ALSA terms, (blocksize * periods) / sample rate = total reported latency. But reported latency is often wrong. Last I checked for my RME card, reported latency was correct in Windows and wrong in Linux, but the actual measured latency was identical. This varies by audio device and driver, though.

Try changing the periods for ALSA in Linux and watch the reported output latency in Reaper change accordingly.
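Quick sanity check of those nominal figures (this is only what gets reported from the buffer settings, not a measured round trip):

```python
# Nominal (reported) latency from buffer settings alone: total samples / sample rate.
def reported_ms(total_buffer_samples, sample_rate):
    return total_buffer_samples / sample_rate * 1000

print(round(reported_ms(256, 44_100), 1))      # ASIO total buffer of 256: 5.8
print(round(reported_ms(128 * 2, 44_100), 1))  # ALSA 128 blocksize x 2 periods: 5.8
print(round(reported_ms(128 * 3, 44_100), 1))  # bump to 3 periods and it grows: 8.7
```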

It's definitely wrong: a 256-sample ASIO buffer is 256 samples of input and 256 samples (until the first sample emerges) of output. Look at the samplesblock variable in a JSFX and you'll see it match the buffer size: a 256 buffer results in samplesblock being 256 samples, and a DAW can't process a block while it is still gathering the samples, nor output samples while it is still processing them. Or just look at how RTL is always at least 2x the buffer size.
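Rough numbers on that doubling (the converter figure is an assumption and varies per interface):

```python
# A nominal 256-sample setting is buffered on both the input and output side;
# add converter latency and that is the floor for round-trip latency.
rate, buf = 44_100, 256
converters_ms = 1.0   # assumed combined AD + DA latency; varies per interface

one_way_ms = buf / rate * 1000
min_rtl_ms = 2 * one_way_ms + converters_ms
print(f"one way: {one_way_ms:.1f} ms, minimum RTL: {min_rtl_ms:.1f} ms")
# one way: 5.8 ms, minimum RTL: 12.6 ms
```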

The whole point of measuring RTL is that the time reported by drivers is often wrong, sometimes very wrong, so we can't compare the latency of Interface A and Interface B just by looking at buffer sizes.

Have a look at the forum thread I posted. No interface is anywhere near 1.5ms at 44.1kHz @ 64 samples, because the bare minimum is double that*. From a scan I see anything from 3.6ms to 11ms.

* + AD/DA latency

That has not been my experience, having made many measurements over the years. Being a little nutty about too much latency, I'm sure I have posted measurements on the Cockos forums many times. 256 buffer size / 44100 sample rate = 5.8ms reported; actual measured latency may be different, of course.

Are you using a loopback test and measuring double the buffer size or more for your device? Is this under Windows or Linux?

Or damn, am I losing it. I haven’t messed with any of this in sooo long.

Yea Snookoda, I’m officially fucking losing it. What happened to PCI/e audio cards? - Page 3 - Cockos Incorporated Forums

No worries, you had me doubting myself there too for 2,880,000 samples @ 48kHz. :smiley: