SynthMaker has a few key concepts that it helps to understand. The terminology can be confusing at first, so here is a guide.
Stream data runs at audio rate, while ‘green’ or GUI data is currently executed only when a trigger “flows through” the affected section of the schematic. This means that green connections carry both triggers and values simultaneously. For example, a trigger is generated whenever a green float value changes.
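To make the distinction concrete, here is a minimal sketch in C++ contrasting sample-rate stream processing with trigger-driven green data. SynthMaker itself is a visual environment, so this is only an analogy, and all names here are hypothetical:

```cpp
#include <cstdio>

// Stream data: evaluated for every sample, at audio rate.
void processBlock(const float* in, float* out, int numSamples, float gain) {
    for (int n = 0; n < numSamples; ++n)
        out[n] = in[n] * gain;
}

// Green data: evaluated only when a change generates a trigger.
struct GreenFloat {
    float value = 0.0f;
    void set(float v) {
        if (v != value) {      // a changed value generates a trigger...
            value = v;
            onTrigger();       // ...which flows downstream through the schematic
        }
    }
    void onTrigger() { std::printf("green value is now %g\n", value); }
};
```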
The terms ‘poly’ and ‘mono’ come up constantly in synthesis, but they are used slightly differently in SM.
(Note: read more about the differences between Poly, Mono and Mono4 in the SM User guide)
The building blocks of SynthMaker projects are referred to collectively as components. Components are of two types: primitives and modules. Primitives are the lowest-level components within the program. Modules are collections of primitives and other modules organized into a higher-level building block, including input and output primitives that let the module communicate with other levels in a schematic.
SynthMaker has components to extract MIDI note events for processing by other modules, but it also has primitive components to extract or send raw MIDI data (excluding system exclusive, or SysEx, data, which is currently not supported). A MIDI message (SysEx aside) consists of a status byte followed by one or two data bytes. SynthMaker’s MIDI Split and MIDI Event components separate the status byte into two integer values: the first is the type of event and the second is the MIDI channel. The event-type integer reads as the status value for MIDI channel 1, because the channel ‘nibble’ has been dropped.
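As a rough sketch of what that split does with the status byte (the function name is illustrative, not SynthMaker’s, and reporting channels as 1 to 16 is my assumption):

```cpp
// Split a MIDI status byte into event type and channel. Zeroing the low
// (channel) nibble leaves the status value as it would read on MIDI
// channel 1, e.g. 0x90 (144) for any note-on.
void splitStatus(unsigned char status, int& eventType, int& channel) {
    eventType = status & 0xF0;        // high nibble: type of event
    channel   = (status & 0x0F) + 1;  // low nibble: channel, here 1..16
}
```

For example, a note-on arriving on channel 3 has status 0x92; the split yields an event type of 144 (0x90, the channel-1 note-on value) and channel 3.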
SynthMaker’s high-level modules communicate frequency values in Hertz (Hz), or cycles per second, but its low-level primitives use a normalized frequency range of zero to one, where one equals the Nyquist frequency: the highest frequency supported by the selected sample rate, which is one-half the sample rate. There are stock low-level modules to convert between these systems, and the MIDI to Poly module has a property setting that lets you select which frequency system it outputs.
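The conversion itself is just a scaling by the Nyquist frequency. A minimal sketch, with names of my own choosing:

```cpp
// 1.0 in normalized frequency corresponds to Nyquist (sampleRate / 2).
float hzToNormalized(float hz, float sampleRate) {
    return hz / (sampleRate * 0.5f);
}

float normalizedToHz(float normalized, float sampleRate) {
    return normalized * (sampleRate * 0.5f);
}
```

At a 44.1 kHz sample rate, for instance, concert A (440 Hz) becomes 440 / 22050 ≈ 0.02 in normalized terms.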
“Oh-oh! You’ve got denormals!” Denormal values are floating point numbers of very small magnitude that cannot be calculated in the same way, nor therefore at the same speed, as ‘normal’ values. For sound applications these values are so small as to be generally considered meaningless, but the floating point operations used by the program still support them. So, to avoid performance degradation, you need to watch out for situations where they can be inadvertently introduced and implement some means of removing them.
So how did they get in there anyway? Many DSP techniques involve feeding back delayed values of an audio signal and mixing them with the current values. If a feedback loop is going to be stable, then an impulse coming into it must be attenuated with each iteration through the loop. (Translation: each time it gets fed back, it’s a little smaller.) If that continues, then without new signal coming in to stop it, the remnant of the old signal will eventually get so small that denormal values are introduced.
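You can watch this happen in a few lines of C++: a single impulse running through a 0.5 feedback gain halves on every pass, and for a 32-bit float it crosses into the denormal (subnormal) range after roughly 127 passes:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    float y = 1.0f;               // an impulse entering the loop
    const float feedback = 0.5f;  // attenuation per pass
    for (int n = 1; n <= 200; ++n) {
        y *= feedback;            // the remnant shrinks every iteration
        if (std::fpclassify(y) == FP_SUBNORMAL) {
            std::printf("denormal after %d passes: %g\n", n, y);
            break;
        }
    }
}
```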
How do you avoid them?
1) Any time you introduce a feedback loop where denormals are possible, you can use the technique sometimes called “elimination by quantization”: add to each sample value a small but normal floating point number of sufficient magnitude that it completely swamps the tiny denormal, which is lost in the addition because it is too small, compared with the normal value, to leave any trace in the result. Then subtract the same value back out, returning the original value if it was a normal number, or zero if it was not (at least in theory, and that’s close enough for our purposes here). See the sketch after this list.
2) Add a small amount of random noise, many decibels lower than even the smallest usable signal, and feed it through the whole audio chain. Then, whenever a feedback loop would otherwise risk decaying into denormals, the small normal values coming in through the signal path will prevent it.
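Here is a minimal sketch of both techniques. The constants and names are illustrative; the exact anti-denormal value is a matter of taste as long as it is a normal number far below audible levels:

```cpp
#include <cstdlib>

// Technique 1: quantization. Adding a tiny but *normal* constant swamps
// any denormal, which then rounds away to zero when the constant is
// subtracted again; normal-sized audio passes through unchanged.
// (Aggressive compiler float optimizations can fold this away, so real
// implementations often make the constant volatile.)
inline float flushDenormal(float x) {
    const float anti = 1e-18f;   // normal, and roughly -360 dBFS
    return (x + anti) - anti;
}

// Technique 2: noise injection. Noise this far below the noise floor is
// inaudible but keeps feedback remnants from ever decaying into the
// denormal range. Mix one sample of it into the chain per audio sample.
inline float tinyNoise() {
    return ((std::rand() / (float)RAND_MAX) - 0.5f) * 2e-18f;
}
```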
Which is better? That depends on how many potential denormal-causing components your project has. If you have just one, quantization is best; if you have dozens, adding noise is likely better. In between, you’ll have to pick.
How do you know you have them? For a VST effect, the sign is very high CPU usage after the audio signal has died away below audible levels, which drops again when a new signal is provided. In a VSTi they will similarly show after all sound dies down, but may also be present when one channel of a poly stream falls idle even though other notes are still playing. The general rule: worse performance after playing stops means denormals are the likely culprit.
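If you want to confirm the diagnosis rather than infer it from the CPU meter, a small debug helper (my own sketch, not a SynthMaker facility) can count subnormal samples in a buffer:

```cpp
#include <cmath>

// Returns how many samples in the buffer are denormal (subnormal).
int countDenormals(const float* buffer, int numSamples) {
    int count = 0;
    for (int i = 0; i < numSamples; ++i)
        if (std::fpclassify(buffer[i]) == FP_SUBNORMAL)
            ++count;
    return count;
}
```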