Modeling question - square wave reproduction

Mowgli

Tele-Meister
Joined
Feb 18, 2021
Posts
407
Location
Southern Jazzville
Using Fourier mathematics one can reproduce a square wave with a series of sine waves. At the “higher frequency” areas of the square wave (i.e. the sharp corners), accurate reproduction requires a huge number of high-frequency sine waves. This is where my understanding of modeling precision hits a speed bump, so I’m hoping someone can expand my knowledge. It’s been decades since I studied this topic.
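
To make that concrete, here is a rough numpy sketch of the idea (the sample rate, fundamental, and harmonic count are arbitrary choices, not anything from a real modeler): summing odd harmonics gets closer and closer to a square wave, but the corners keep ringing (the Gibbs phenomenon) no matter how many terms you add.

```python
import numpy as np

fs = 48_000                     # sample rate (Hz), arbitrary
f0 = 100                        # square-wave fundamental (Hz), arbitrary
t = np.arange(fs) / fs          # one second of time

def square_partial_sum(t, f0, n_harmonics):
    """Fourier-series approximation of a square wave:
    odd harmonics weighted by 1/k, scaled by 4/pi."""
    y = np.zeros_like(t)
    for k in range(1, 2 * n_harmonics, 2):   # odd harmonics 1, 3, 5, ...
        y += np.sin(2 * np.pi * k * f0 * t) / k
    return 4 / np.pi * y

coarse = square_partial_sum(t, f0, 5)    # 5 harmonics: visibly rounded, wobbly corners
fine = square_partial_sum(t, f0, 100)    # 100 harmonics: sharper edges, narrower ripple
print(coarse.max(), fine.max())          # both overshoot 1.0 near the corners (Gibbs phenomenon)
```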

The sampling rate for accurate reproduction must be at least twice the highest frequency present in the signal (the Nyquist rate), if I recall correctly. Otherwise the available bandwidth is exceeded and you get aliasing or truncation errors. So good samples prior to D-to-A conversion via FFTs (fast Fourier transforms) must contain enormous data sets wherever high-frequency information is present.
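
Here's a quick sketch of the aliasing I'm worried about, assuming a deliberately low 8 kHz sample rate and a 5 kHz test tone (numbers picked only to make the effect obvious): a tone above half the sample rate folds back and shows up at the wrong frequency.

```python
import numpy as np

fs = 8_000                          # deliberately low sample rate (Hz)
t = np.arange(fs) / fs              # one second of samples
f_in = 5_000                        # input tone above fs/2 = 4 kHz

x = np.sin(2 * np.pi * f_in * t)    # sampled with no anti-alias filter in front

spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), d=1 / fs)
print(freqs[np.argmax(spectrum)])   # 3000.0 -- the 5 kHz tone has folded back to fs - f_in
```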

So do modelers involve real time sampling of data (AtoD sampling creating finite voltage value data sets) followed by some sort of data interpolation based upon manipulation of that sampled data set by an imprint algorithm? And all of this occurs prior to DtoA conversions?

That seems incredibly data and processor intensive. Plus, given the finite data sets that have to be acquired, I have difficulty understanding how clipped waves, specifically the higher frequency information located at the corners, can be accurately reproduced without artifacts and latency.

I feel there’s a gap (or ten) in my knowledge base here so any help with this understanding would be appreciated.

And one additional thought. I’ve read a few posts here and elsewhere about people selling their analog gear and adopting this new modeling technology exclusively, as if this is the final frontier of electric guitar amp technology. Suddenly it occurred to me: what if a totally new and different technology emerged, something totally unforeseen, that rendered present modeling technology obsolete? Would they regret selling off all of their analog gear if the new technology still used analog amplifiers as its foundation and their modeling data couldn’t be imported into it?

After all, tube users didn’t foresee the discovery of solid state devices, for example. Just a thought.
 

kennl

Tele-Afflicted
Joined
Feb 6, 2007
Posts
1,957
Location
Moon Township, PA
That seems incredibly data and processor intensive. Plus, given the finite data sets that have to be acquired, I have difficulty understanding how clipped waves, specifically the higher frequency information located at the corners, can be accurately reproduced without artifacts and latency.

modelers all have some latency
 

philosofriend

Tele-Afflicted
Joined
Oct 28, 2015
Posts
1,042
Location
Kalamazoo
"So do modelers involve real time sampling of data (AtoD sampling creating finite voltage value data sets) followed by some sort of data interpolation based upon manipulation of that sampled data set by an imprint algorithm? And all of this occurs prior to DtoA conversions?" This is all correct, though I'm not sure what you mean by interpolation. The computer doesn't need to guess what the air pressure was between two samples. If it needed that it would just sample at a higher rate.

In Bob Katz's book "Mastering Audio" he goes into detail about how digital audio can be done really right and how it is often done really wrong. It helps to go higher than 2x the highest frequency because there need to be filters to reduce aliasing, and real filters do not act as perfect brickwall devices. It helps to sample at high rates and it helps to use more bits on each sample. Dithering has to be applied correctly. So you are right, good sound is data and processor intensive. The engineering has to be done by folks who care and who are willing to charge more than the cheapest item on the market. Digital audio can be engineered well or poorly. The people who do it well have those square corners under control, with different strategies to reduce latency. The artifacts merely have to be inaudible; we're not dealing with pure theory for beings of infinite hearing. We're just trying to make a Katana that sounds like a real (lo-fi) guitar amp.
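
To show what "applied correctly" means for dithering, here's a little toy sketch (the 8-bit depth and the quiet 440 Hz sine are arbitrary choices): adding a tiny amount of triangular noise before rounding trades signal-correlated quantization distortion for a low, steady noise floor.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 48_000
t = np.arange(fs) / fs
x = 0.01 * np.sin(2 * np.pi * 440 * t)   # a very quiet sine, where quantization bites hardest

def quantize(signal, bits, dither=False):
    """Round to a given bit depth, optionally adding TPDF dither of +/- 1 LSB first."""
    scale = 2 ** (bits - 1)
    s = signal * scale
    if dither:
        s = s + (rng.random(len(s)) - rng.random(len(s)))  # triangular-PDF noise, +/- 1 LSB
    return np.round(s) / scale

plain = quantize(x, 8)                  # distortion products correlated with the signal
dithered = quantize(x, 8, dither=True)  # distortion replaced by a benign noise floor
```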

One way of conceiving a square wave is as a sum of sine waves that form a series higher and higher in frequency, until you can't hear them or real speakers can't reproduce them. If the signal is processed in the frequency domain it will lose some coherence in the time domain, and the timing of the different, ever smaller sine waves will not be coordinated to make corners as square as the unprocessed signal had.

The general consensus is that humans can't hear actual waveforms; our ears operate by putting together data about the amplitude (but not the exact timing) of different frequencies. The hairs in the inner ear are tuned to frequency bands. As much fun as it is to think about perfect square waves, our ears don't seem to be able to tell if the corners are perfect. Not everyone agrees with this.

Most radio stations have a "phase rotator" in their processors that is supposed to restore processed signals to something more resembling real musical instruments. For reasons that I do not understand, these phase rotators go ape**** on digitally overloaded signals (like snare hits on most modern music) and make them sound worse. Katz's book talks about all of this; most of it is over my head, other than the conclusion that good digital equipment sounds way better than bad.
 

Mowgli

Tele-Meister
Joined
Feb 18, 2021
Posts
407
Location
Southern Jazzville
Philo - Thanks for the details.

It sounds like there are "workarounds" for dealing with the "non-essential" high frequency data such that more "digestible" data sets are presented to the processors. This makes a good deal of sense. Why process data that can't be heard?

"Interpolation" in this context meant an "alteration" of the original sampled digital data set by adding/subtracting values to the original values (i.e. where the "modeling" really occurs by the insertion/replacement of data - hence the modification of the original data set). For example, a 2-D data set where discrete voltage amplitude is present on one axis and time on the other axis. Altering the amplitudes would be a way to model the sound if a characteristic sound imprint could result from an algorithm where data value alteration yields that desired sound after processing.

In a totally different example, certain data sets acquired during MR imaging can be altered. For example, one can "add zeros to the periphery" of a set of acquired data (imagine a 2-D register of values). The formatting of this acquired data is referred to as k-space. In 2-D k-space consisting of frequency- and phase-encoded data coordinates, the "peripheral" data, which is of very low amplitude relative to the central data, contributes little to the overall image when processed by a 2-D FFT (fast Fourier transform)... but it adds a little noise to the entire image while simultaneously increasing the so-called "spatial resolution" of the image due to the peripheral addition of zero-value data. It's a trick or workaround to increase spatial resolution at the expense of SNR (signal-to-noise ratio). This technique is called "Zero Fill Interpolation" because zero values are added to the data set. There are always trade-offs to these "tricks."
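
For anyone curious, the same zero-fill trick is easy to demonstrate with a generic 2-D FFT (a toy illustration, not an actual MRI reconstruction): pad the transform-domain data with zeros, invert, and you get a finer grid without any new information.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((64, 64))                  # stand-in for a reconstructed 64x64 image

kspace = np.fft.fftshift(np.fft.fft2(image))  # transform-domain data, low frequencies centered

# "Zero fill interpolation": embed the 64x64 data in a 128x128 array of zeros.
padded = np.zeros((128, 128), dtype=complex)
padded[32:96, 32:96] = kspace

interp = np.fft.ifft2(np.fft.ifftshift(padded)).real
print(interp.shape)                           # (128, 128): a finer grid, but no new information
```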

I've got a great deal to learn about the processing of digital audio. Sometimes learning the theory lets you troubleshoot problems, so I try to learn the theory, too.

Thanks again for your input.
 

radiocaster

Poster Extraordinaire
Joined
Aug 18, 2015
Posts
9,921
Location
europe
Just want to point out that analog gear, be it fuzz pedals or even real synths, often doesn't produce anything close to perfect square waves.
 

Blrfl

Tele-Afflicted
Joined
May 3, 2018
Posts
1,971
Location
Northern Virginia
Using Fourier mathematics one can reproduce a square wave with a series of sine waves. At the “higher frequency” areas of the square wave (i.e. the sharp corners), accurate reproduction requires a huge number of high-frequency sine waves.

I can short-circuit this discussion a bit for you: almost everything in this class of application models things done in analog circuits, and those are time-domain creatures. The frequency domain, Fourier, and his transforms are rarely in the picture.

In the time domain, you can represent a near-perfect square wave by generating alternating strings of full-positive and full-negative samples with the number of each depending on the frequency and sample rate. (I say near-perfect because, at any given sample rate, not every frequency lines up with an even number of samples. But it's a lot closer than you'd ever get in the frequency domain with a very large number of very small bins.) All of the high-frequency content is there and it stays in a compact form.
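
A minimal sketch of that alternating-sample idea (the sample rate and frequency are arbitrary): a phase accumulator decides whether each sample is full-positive or full-negative, with no sine summation anywhere.

```python
import numpy as np

def naive_square(freq, fs=48_000, seconds=1.0):
    """Time-domain square wave: +1 while the phase sits in the first half of the
    cycle, -1 in the second half. No sine summation involved."""
    n = np.arange(int(fs * seconds))
    phase = (freq * n / fs) % 1.0          # fractional position within each cycle
    return np.where(phase < 0.5, 1.0, -1.0)

sq = naive_square(110.0)   # 110 Hz doesn't divide 48 kHz evenly, so the half-periods
                           # come out as 218 or 219 samples instead of exactly 218.18
```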

So do modelers involve real time sampling of data (AtoD sampling creating finite voltage value data sets) followed by some sort of data interpolation based upon manipulation of that sampled data set by an imprint algorithm? And all of this occurs prior to DtoA conversions?

The term "imprint algorithm" is unfamiliar to me but yes, in general, that's how it works: inhale it into digital, process it to taste and burp it back out as analog.

Conversion to or from analog at the ends is a necessary evil because the world is analog. That process (in either direction) is entirely analog and therefore subject to the limitations of the components involved (e.g., the corners of a square wave won't be perfectly represented because the op amps involved have less-than-ideal slew rates and settling times).
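
A crude digital caricature of that slew-rate point, just to visualize it (the step limit and test square wave are made-up numbers): cap how much the output may change per sample and the square wave's vertical edges turn into short ramps.

```python
import numpy as np

def slew_limit(samples, max_step):
    """Cap the per-sample change -- a rough stand-in for an op amp's finite slew rate."""
    out = np.empty_like(samples)
    out[0] = samples[0]
    for i in range(1, len(samples)):
        step = np.clip(samples[i] - out[i - 1], -max_step, max_step)
        out[i] = out[i - 1] + step
    return out

square = np.where(np.arange(480) % 240 < 120, 1.0, -1.0)   # 200 Hz square at 48 kHz
rounded = slew_limit(square, max_step=0.2)                 # each edge becomes a 10-sample ramp
```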
 

unixfish

Doctor of Teleocity
Silver Supporter
Joined
Apr 20, 2013
Posts
16,306
Location
Northeast Ohio, USA
It sounds like the answer here is compute power. Put an i7 or an i9 processor in an amp, or maybe more than one, and take advantage of all those threads to parallelize summing the sine waves until they come out as square as possible. That would probably enable the sounds, but at a high cost and with lots of heat. Hmmmmmm...
 

zhyla

Tele-Meister
Joined
Aug 30, 2013
Posts
215
Location
Northern Hemisphere
So do modelers involve real time sampling of data (AtoD sampling creating finite voltage value data sets) followed by some sort of data interpolation based upon manipulation of that sampled data set by an imprint algorithm? And all of this occurs prior to DtoA conversions?

I'm sure there is a variety of approaches but I think the answer to this question is no, the modelers available today are not doing full amp circuit simulation.

One way models are created is by doing sweeps of tones and controls and measuring the output -- similar to the Kemper Profiler but in more of a lab scenario.

I'm sure some models are built mathematically based on knowledge of the circuit and measurements.

That seems incredibly data and processor intensive. Plus, given the finite data sets that have to be acquired, I have difficulty understanding how clipped waves, specifically the higher frequency information located at the corners, can be accurately reproduced without artifacts and latency.

I think you're looking at this wrong. DSP implementations are fundamentally software. In software nearly everything is possible, and there are often shortcuts (lookup tables, mathematical approximations, hardware acceleration, etc.) to solve computation-intensive problems. And, as @Blrfl pointed out, in software a square wave is not produced by an infinite series of sine waves; it's just a square wave.
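
As an example of the lookup-table shortcut (the table size and the tanh curve are arbitrary stand-ins): precompute an expensive curve once, then just index into it with linear interpolation at run time.

```python
import numpy as np

# Precompute an expensive curve once (tanh here, purely as an example).
TABLE_SIZE = 4096
table_x = np.linspace(-2.0, 2.0, TABLE_SIZE)
table_y = np.tanh(table_x)

def shape_with_lut(samples):
    """Waveshape via table lookup with linear interpolation,
    instead of evaluating tanh() for every sample."""
    clipped = np.clip(samples, -2.0, 2.0)
    return np.interp(clipped, table_x, table_y)

out = shape_with_lut(np.linspace(-3.0, 3.0, 7))   # inputs beyond +/-2 just land on the table ends
```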

I've played around with this stuff a little bit (I built a DSP pedal) and there definitely are limitations. I really wanted to build a convolution-based reverb pedal so I could capture impulse responses of interesting rooms and use them in a pedal. That's prohibitively expensive in terms of RAM for the hardware I was using.
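
For a sense of scale, here's a back-of-the-envelope sketch (48 kHz and a 3-second impulse response are assumed numbers, not my pedal's specs): naive convolution costs one multiply-add per IR tap per output sample, which is why practical units lean on FFT-based (partitioned) convolution.

```python
import numpy as np
from scipy.signal import fftconvolve

fs = 48_000
ir_len = int(fs * 3.0)                       # 144,000 taps for a 3-second room response

# Direct convolution: roughly ir_len multiply-adds per output sample.
print(ir_len * fs / 1e9, "GMAC/s for naive real-time convolution")   # ~6.9 GMAC/s

# Offline, FFT-based convolution is cheap; real-time units use a partitioned
# variant of the same idea to keep latency down.
rng = np.random.default_rng(0)
dry = rng.standard_normal(fs)                # one second of test signal
ir = rng.standard_normal(ir_len) * np.exp(-np.arange(ir_len) / (0.5 * fs))
wet = fftconvolve(dry, ir)
```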
 

Mowgli

Tele-Meister
Joined
Feb 18, 2021
Posts
407
Location
Southern Jazzville
I think I get it now. Very simply put, after the signal undergoes ADC, the various modeling algorithms generate the numbers for the various attributes of the desired sound, add them to and/or subtract them from the sampled data, and then output the result through the DAC. This allows them to represent any waveform.

Any imperfections due to the inherent limitations of the bandwidth (sampling rate), quantization, “relatively” slow rise times (slew rates) of the converters, etc. may or may not be audible, depending upon the peculiarities of the sampled data and the particular modeling algorithm.

Many years ago I read about impulse responses in an Electronic Musician article and it made sense. I just wish I could remember it; it was ~ two decades ago!

It’s all certainly progressed from the old analog Moog synth days of unstable VCOs, VCFs, envelope generators, etc. I recall when that was the vanguard. Time for my nap.

Thanks for the clarifying responses.
 

raito

Poster Extraordinaire
Joined
Nov 22, 2010
Posts
6,674
Location
Madison, WI
Get yourself a copy of Richard Hamming's book on digital filtering, or watch his lectures and all will be revealed.

Beyond the Nyquist frequency, the information is lost (or aliased to the point of being useless). There is no free lunch, and nothing short of outside information can bring it back. That cuts down a lot on how much data you really need.

Some ADC ICs have built-in lowpass filters to keep aliasing at a minimum by disallowing frequencies above Nyquist. OK, they just attenuate them, as there isn't a true brick-wall filter.

While we can use the method above for generating square waves (I do it myself), I really doubt that any modeler does it. Maybe for effects.

I disagree very much that Fourier, etc. aren't used. If I had to guess what's happening when circuit emulation is not used, I'd say that there's a bunch of transfer functions going on to emulate the target amp indirectly. Probably non-linearly. It's heavy processing, but not all that involved.
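
One plausible reading of "a bunch of transfer functions... probably non-linearly" is a block-style structure: a linear filter, a static nonlinearity, then another filter (sometimes called a Wiener-Hammerstein model). Here's a toy sketch of that shape; the cutoffs, gain, and tanh curve are invented for illustration and aren't how any particular modeler works.

```python
import numpy as np
from scipy.signal import butter, lfilter

fs = 48_000

def amp_block_model(x):
    """Toy block model: pre-emphasis filter -> static nonlinearity -> low-pass,
    a rough stand-in for a tone stack, a clipping stage, and speaker rolloff."""
    b_hi, a_hi = butter(1, 700 / (fs / 2), btype="high")    # brighten before clipping
    b_lo, a_lo = butter(2, 5_000 / (fs / 2), btype="low")   # tame the fizz afterwards
    pre = lfilter(b_hi, a_hi, x)
    clipped = np.tanh(8.0 * pre)
    return lfilter(b_lo, a_lo, clipped)

t = np.arange(fs) / fs
y = amp_block_model(np.sin(2 * np.pi * 110 * t))
```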
 

Blrfl

Tele-Afflicted
Joined
May 3, 2018
Posts
1,971
Location
Northern Virginia
Beyond the Nyquist frequency, the information is lost (or aliased to the point of being useless). There is no free lunch, and nothing short of outside information can bring it back. That cuts down a lot on how much data you really need.

Fortunately, guitar systems don't have to work at high frequencies. The frequency of the high E at the 20th fret is 1,047 Hz, so even allowing for a healthy stack of harmonics, figure 16 kHz as a reasonable cap. A system sampling at 48 kHz puts the Nyquist frequency at 24 kHz, 1.5 times that cap; at 96 kHz it's three times. That leaves a decent amount of headroom before things go bad, more if you consider that a speaker/cabinet model will skim off everything above maybe 8 kHz.

While we can use the method above for generating square waves (I do it myself), I really doubt that any modeler does it. Maybe for effects.

Effects would be pretty much the only place it needs to be generated, and even that's probably limited to tremolo. It's worth noting that a hard-clipped sine wave converges on a square wave as the amplitude increases, which you'd see happening in a model of a solid-state distortion pedal. I don't imagine that trying to shoehorn that into the frequency domain would work out very well.
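
That convergence is easy to see in a couple of lines (the gains here are arbitrary):

```python
import numpy as np

t = np.arange(48_000) / 48_000
sine = np.sin(2 * np.pi * 100 * t)

def hard_clip(x, gain):
    """Amplify, then clamp to +/-1; the higher the gain, the squarer the result."""
    return np.clip(gain * x, -1.0, 1.0)

mild = hard_clip(sine, 2.0)       # flattened peaks
extreme = hard_clip(sine, 100.0)  # essentially a square wave
```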

I disagree very much that Fourier, etc. aren't used. ... I'd say that there's a bunch of transfer functions going on to emulate the target amp indirectly. Probably non-linearly. It's heavy processing, but not all that involved.

Modeling entire amps as black boxes has been tried; with the exception of Kemper, nobody's come up with results that were anything to write home about. I think a lot of early attempts made people who heard them decide that modeling is and will always be awful.

The big leaps in model quality have been almost entirely a result of component modeling. None of that technology is new and all of it has been proven over many decades as an engineering tool in the form of circuit simulators. What's new is that the processing budgets suitable for consumer products have become big enough to do a pedalboard and amp's worth of it in real time.

The models for discrete components and solid-state semiconductors are all time-domain. There's a small community of people modeling tubes in LTSpice; I've seen some of their models and those, too, are time-domain. I can't think of anything in a modeler other than the tuner that needs to operate in the frequency domain. (Don't get me wrong; that stuff has its place but I don't think this is the application.)
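
To give a flavor of what time-domain component modeling means in practice, here's a stripped-down sketch: at each sample, solve the nonlinear node equation of a resistor driving a pair of back-to-back diodes (the Shockley equation), which is a much-simplified version of the per-step nonlinear solves a circuit simulator performs. The component values are illustrative and not taken from any real circuit.

```python
import numpy as np

# Illustrative values, not from any real pedal or amp.
R = 2.2e3        # series resistor, ohms
Is = 1e-12       # diode saturation current, amps
Vt = 0.02585     # thermal voltage, volts

def clipper_sample(vin, iters=60):
    """Solve (vin - v)/R = 2*Is*sinh(v/Vt) for the node voltage v by bisection:
    the static nodal equation of a resistor feeding antiparallel diodes to ground."""
    lo, hi = min(0.0, vin), max(0.0, vin)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if (vin - mid) / R - 2.0 * Is * np.sinh(mid / Vt) > 0.0:
            lo = mid       # residual positive: the root lies above mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

fs = 48_000
t = np.arange(fs // 10) / fs                        # 0.1 s is plenty for a demo
vin = 2.0 * np.sin(2 * np.pi * 220 * t)
vout = np.array([clipper_sample(v) for v in vin])   # peaks flatten near the diode knee (~0.5 V)
```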
 

klasaine

Doctor of Teleocity
Silver Supporter
Joined
Nov 28, 2006
Posts
10,016
Location
Los Angeles, Ca
Slightly tangential but these just hit the market ...




I've ordered all 3 as my 60th bday present to myself.
 



