Linear interpolation/synthesis tutorial

  • Joined: 12 Nov 2005
  • Posts: 54
Linear interpolation/synthesis tutorial
Posted: Fri Sep 12, 2008 10:55 am
I've been working on a new tutorial that shows a clean and efficient way to emulate sound chips with moving average filtering (linear interpolation):

Fast linear resampling of waveforms tutorial

linear_resampling.zip (example code)

I'd like to make sound emulation easier for people, and converting from the chip's clock rate to the output sample rate seems to be a main stumbling block. Note that this doesn't cover more complex FIR-based bandlimiting, as linear interpolation works well for many uses and I want to keep things really simple for anyone to understand fully.

The tutorial starts at basic waveform synthesis without any interpolation, then builds from that one step at a time. I wrote some simple portable SDL-based interactive waveform graphing code examples to go along with it, so you can see actual code and the result, and modify it to see how that affects things.
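
For anyone skimming, the "no interpolation" starting point looks roughly like the following: each output sample just takes the wave's current amplitude, with every transition rounded to a whole sample. This is only an illustrative sketch in the spirit of that first step; the names and structure are mine, not taken from the tutorial's actual examples.

/* Naive square-wave synthesis with no interpolation: each amplitude
   transition is rounded to a whole output sample, which is what causes the
   harsh aliasing the later steps address. Illustrative sketch only, not
   code from the tutorial. */
void gen_square_naive( short out [], int count,
        double clocks_per_sample, int period, int volume )
{
    double clock = 0;      /* current time in chip clocks */
    double next  = period; /* clock time of next amplitude transition */
    int    phase = +1;     /* +1 or -1 */
    int    i;
    
    for ( i = 0; i < count; i++ )
    {
        /* flip for every transition that has passed by this sample */
        while ( next <= clock )
        {
            phase = -phase;
            next += period;
        }
        out [i] = (short) (phase * volume);
        clock += clocks_per_sample;
    }
}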

I'd like to improve the tutorial, so give any feedback or even critique if you think it's unnecessary or too complex or whatever. I got a little tired near the end after working several days on it, so the last sections don't have as many pictures as I'd like.
  • Site Admin
  • Joined: 19 Oct 1999
  • Posts: 14770
  • Location: London
Posted: Fri Sep 12, 2008 11:51 am
There's a typo where you say 1000 when you mean 1000000.

I find it hard to figure out how to get from what you describe to a working synth. I don't claim that my implementation is good, let alone understandable, but I don't see how implementing things as sums of differences helps matters. The extra layer of abstraction between the output samples and the chip emulator seems to confuse matters.

I think the gist of it is that the chip emulator outputs differences and in the resampling stage you simply add some fractional part of these differences as appropriate to produce the output sample. So are the differences being generated at the "real" clock rate, and converted to samples at the output rate?

The diversion into fixed point maths seems to confuse matters even more. In my experience, fixed point is confusing enough that you generally want to implement it as floating point first, and convert it after you're happy that it makes sense because the conversion obscures the details. Then again, my experience with fixed point is not great. I did a lot of work to make my PSG emulator run faster on the GBA, but I'm certainly no expert.
  • Joined: 26 Aug 2008
  • Posts: 292
  • Location: Australia
Posted: Fri Sep 12, 2008 12:56 pm
I mostly agree with Maxim. It is good in parts, but it lacks a form of cohesion that would allow someone to go from start to finish. Why are differences any faster/better? You still need to output a wave and read from the original buffer the same amount... so I have no idea why you go with the "differences" route.

One issue with averaging is that it can still produce some bad aliasing; maybe not as bad as with some methods, but still. I mean, if you have something updating at 30,000 Hz you can't reproduce it unless you have >60,000 Hz playback, so averaging those samples in affects the output somewhat. Most TV speakers would have an upper range of about 20,000 Hz, so what happens when a game has a 55 kHz note? When you average in samples for downsampling, you're capturing some of those noises you can't represent properly, which affects your signal (for example, a 55 kHz component sampled at 44.1 kHz folds back to an audible tone around 10.9 kHz). What needs to happen is to remove those "frequencies", which is easier said than done.

Most emulators seem to "limit" frequencies to below 22,000 Hz; however, none of them take volume changes into account with regard to aliasing... and since most emulators apply them immediately, I have noticed it can affect the output.

If you could write a detailed post on a very easy to implement, near-perfect low-pass filter, it would be very interesting... I like what you have done so far, it just needs a bit of revision. :)
  • Joined: 12 Nov 2005
  • Posts: 54
Posted: Fri Sep 12, 2008 1:13 pm
Maxim wrote
I find it hard to figure out how to get from what you describe to a working synth.

I'm going to post more example code that actually plays a continuous square wave (via SDL), and a very simple standalone library that implements the method. I was hoping the tutorial wouldn't fall so short.

Quote
I don't see how implementing things as sums of differences helps matters. The extra layer of abstraction between the output samples and the chip emulator seems to confuse matters.

I think it makes things much simpler. Consider this square wave emulator code:

int amp;      /* current amplitude */
int time;     /* time of next amplitude transition */
int sign = 1; /* +1 or -1 */
int period;   /* clocks between amplitude transitions */
int volume;   /* volume of wave */

/* Runs square wave from wherever it was up to end_time */
void run_square( blip_buffer_t* blip, int end_time )
{
    /* Run as long as time is within duration */
    while ( time < end_time )
    {
        /* Calculate new amplitude and find delta from previous */
        int new_amp = volume * sign;
        int delta   = new_amp - amp;
        amp = new_amp;
       
        /* Add delta to difference buffer */
        blip_add( blip, time, delta );
       
        /* Negate sign */
        sign = -sign;
       
        /* Advance to time of next phase transition */
        time += period;
    }
}


When the emulated CPU is going to write to a sound register, it FIRST runs the square wave to the present emulated time, then does the write. blip_add() handles synthesis; you just give it the time (in clocks) and the delta. Every video frame, the emulator would run the channels until the end of the frame, then read the samples out of the buffer and play them back. This makes the sound emulation code very clean, in my opinion. I'll have to get around to posting that final demo code.
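
To make that flow concrete, here's a rough sketch of the emulator side, using the run_square() above; CLOCKS_PER_FRAME and the function names are placeholders of mine, not code from the tutorial:

enum { CLOCKS_PER_FRAME = 59736 }; /* example value only */

/* CPU writes the volume register at the current emulated time (in clocks) */
void write_volume_reg( blip_buffer_t* blip, int cpu_time, int new_volume )
{
    run_square( blip, cpu_time ); /* catch the wave up to 'now' first... */
    volume = new_volume;          /* ...then apply the write */
}

/* Called at the end of each video frame */
void end_sound_frame( blip_buffer_t* blip )
{
    run_square( blip, CLOCKS_PER_FRAME ); /* finish the frame */
    time -= CLOCKS_PER_FRAME; /* rebase so the next frame starts at clock 0 */
    /* then read the accumulated samples out of the buffer and play them */
}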

Quote
The diversion into fixed point maths seems to confuse matters even more. In my experience, fixed point is confusing enough that you generally want to implement it as floating point first, and convert it after you're happy that it makes sense because the conversion obscures the details. Then again, my experience with fixed point is not great. I did a lot of work to make my PSG emulator run faster on the GBA, but I'm certainly no expert.

Yeah, that's one reason I left it for last. I was considering having a floating-point implementation, but the "5c-any_rate.cpp" code example seems clear enough as a stepping stone:

// Ratio for converting from clocks to samples
int const numer = 6;
int const denom = 125;

// Adds amplitude transition at specified position in buffer,
// where position is at source rate, not output rate.
void add_delta( short out [], int clocks, int delta )
{
    // Multiply by numerator of conversion factor
    int num = clocks * numer;
   
    // Separate whole and fraction
    int whole = num / denom;
    int fract = num % denom;
   
    // Divide delta between two difference samples.
    int second = delta * fract / denom;
    int first  = delta - second;
   
    // Add to buffer
    out [whole  ] += first;
    out [whole+1] += second;
}
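
The piece add_delta() doesn't show is how the difference buffer turns back into ordinary samples: each output sample is just the running sum of the differences up to that point. A minimal sketch of that step (my code under the assumptions above, not the tutorial's):

// Turns a block of differences into output samples by keeping a running sum.
// 'sum' must persist between calls so the waveform carries across blocks.
// A real implementation would also clamp to the 16-bit range.
void render_samples( short out [], short const diffs [], int count, int* sum )
{
    int i;
    for ( i = 0; i < count; ++i )
    {
        *sum += diffs [i];
        out [i] = (short) *sum;
    }
}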


I think that going through the code examples would make things clearer. They cover a lot of implementation details that I didn't cover in the text; I wanted to focus on the core concepts in the text, and let the code more precisely detail things.
PoorAussie wrote
One issue with averaging is it can produce some bad aliasing, maybe not as bad as in some methods but still.

Oh, I agree that averaging isn't nearly the best one can do, but on this board anything more than averaging is usually discounted, partly because it's more complex to implement. I think averaging is a great starting point, since it's much better than nothing at all, and very easy to understand.

If you'd like to discuss more complex methods, let me know (or just start a thread). I'd be happy to help with implementing it. This thread should focus on this tutorial and averaging.
  • Site Admin
  • Joined: 19 Oct 1999
  • Posts: 14770
  • Location: London
Posted: Fri Sep 12, 2008 2:28 pm
Of course proper subsampling requires filtering of frequencies above the Nyquist limit; I don't mean to sound disdainful of that. However, for the SMS, 44 kHz imposes a Nyquist limit above the highest frequency it can output on real hardware anyway (due to low-pass filtering), so primary aliasing (i.e. of the fundamental) doesn't really happen. The argument for filtering out the high harmonics of square waves is the part I tend to disagree with.

Linear averaging without filtering makes a big difference to the quality and avoids pretty much all of the primary aliasing. So it's good to get that explained and will help avoid some of the worst side effects. Go listen to Aztec Adventure or Alex Kidd: The Lost Stars in Dega for example.

Secondary aliasing - due to the "sharpness" of volume changes and the high harmonics of the fundamentals of the square waves - tends to be more noticeable when the sampling rate is lower. It may also depend on the quality of the audio system - my cheap speakers from 1996 sound great to me (vastly better than most computer speakers of the era) but probably aren't up to "studio" quality by a long shot. So when I'm targeting 44kHz on a PC, I don't expect to be held to the highest of standards - if it sounds as good as MP3, I should be fine.

I'm open to suggestions, though. A difference-based system may well be a neat solution but it doesn't help me use off-the-shelf PCM-generating chip emulators (unless I apply a layer over them). I'm considering moving to a full-on libresample-based mixer for in_vgm in order to deal with the aliasing properly and allow all sound chips to be emulated at their "native" rate. This is likely to be rather more CPU-heavy though and lightweight alternatives do interest me. Overall I'd say that Blargg seems to have a distinctly above-average handle on sound theory which gives him great confidence in his opinions, but us lazy coders don't pay much attention. I'm time-poor as well as lazy so I won't necessarily be fast to respond but I'm happy to be convinced so long as the argument is thorough enough; examples also really help, but my own subjectivity may overrule matters.
  • Joined: 12 Nov 2005
  • Posts: 54
Posted: Fri Sep 12, 2008 3:05 pm
First off, I added complete examples that play sound with mouse control, and that minimal library (and fixed that 1000->1000000 error in the text; thanks):

linear_resampling2.zip

I want to focus on averaging and this particular method. I feel that this method of synthesis offers significant advantages, both in readability of the sound emulation code and in efficiency. I'm going to examine your sn76489 sound emulation code and try rewriting it to use this method, keeping the original structure as intact as possible so the two can easily be compared. I think it will be instructive. I'd like to keep comparisons of linear averaging against other methods out of this thread. I WOULD like to discuss that, and how to most accurately emulate SMS sound (in another thread), but first someone needs to post some good recordings of the SMS to serve as the standard to judge our efforts against.

Maxim wrote
Linear averaging without filtering makes a big difference to the quality and avoids pretty much all of the primary aliasing. So it's good to get that explained and will help avoid some of the worst side effects. Go listen to Aztec Adventure or Alex Kidd: The Lost Stars in Dega for example.

At least we agree on something. :) Like you, I've found that on average computer speakers I can't even tell; it's only with headphones and higher lone notes that it's very noticeable. That's why I've lightened up on linear averaging in the past year or two.

Quote
I'm open to suggestions, though. A difference-based system may well be a neat solution but it doesn't help me use off-the-shelf PCM-generating chip emulators (unless I apply a layer over them). I'm considering moving to a full-on libresample-based mixer for in_vgm in order to deal with the aliasing properly and allow all sound chips to be emulated at their "native" rate. This is likely to be rather more CPU-heavy though and lightweight alternatives do interest me.

I'd be interested in helping design a lightweight framework that allows various quality levels from the SAME source code. For the lowest quality, the code should be very easy to follow from emulator to sample output, so that there's no "voodoo". Only for the highest quality should there be more complex stuff, but it should still behave the same as the simpler stuff, so that it's clear where things are happening.

Quote
I'm time-poor as well as lazy so I won't necessarily be fast to respond but I'm happy to be convinced so long as the argument is thorough enough; examples also really help, but my own subjectivity may overrule matters.

Thanks for the offer. The bar is high, but at least it's there if I want to make my case. I do realize that reimplementing things that already work requires justification, so that's one reason I'm trying to make a tutorial that will help those writing new sound emulators.
  • Joined: 12 Nov 2005
  • Posts: 54
Posted: Sat Sep 13, 2008 10:28 am
I've done the conversion of sn76489 to use the linear averaging method described, as implemented by the tiny blip library (in C). The archive contains the original code, an intermediate version, and the end version. First, some concepts to make discussion clearer. In an emulator, the sound hardware's usage boils down to the following:

1. Write to some register
2. Run the sound emulator for some amount of time
3. Read any new samples that have been generated (usually done once every 1/60 second or so)

In the sn76489 code, SN76489_Write() does #1 and SN76489_Update() does #2 and #3 (running sound generates samples immediately). The synthesis loop is built around samples. For each output sample, it calculates and outputs the current amplitude to the sample buffer, then runs the sound hardware for however many clocks occur during that sample.

The rewritten version using blip is built around #2, running the sound emulator for some amount of time, with hardly any consideration for output samples. Fundamentally, the sound emulator outputs a waveform that can be specified by the times where it changes amplitude, and the new amplitudes it changes to. The blip version has a "run channel for this many clocks" routine which generates whatever transitions occur within that duration. It first calculates when the next transition will occur, then goes into a loop that adds that transition to the delta buffer at the calculated time, then calculates when the next transition will occur and continues the loop. Once the next transition's time is beyond the number of clocks to run, it calculates how many clocks beyond and stores that in the channel's ToneFreqVal. So once it's done, all the registers look as they would in the original code.

The delta buffer needs to know when transitions occur. It does this with a clock count, where 0 is the beginning, and greater values are later. Since sound runs potentially forever, this clock would overflow an integer if nothing were done. To avoid this, the count is reset periodically. I call this concept a "time frame". Synthesis occurs within a time frame, where clock 0 is the beginning, and larger values are later. The sound emulator runs the channels, which add transitions at various clock times within the frame. Once this is done, the time frame is ended. When ended, a clock count is specified. This is the number of clocks that were synthesized. Ending a time frame does two things: It makes the output samples for that frame available for reading out. It also begins a new time frame at the end of the old one. If the time frame's length was L clocks, then what would have been at time L before ending the frame is now at time 0 in the new time frame.

Here's a concrete example. Amplitude changes are handled by a helper function, for clarity.
blip_buffer_t* blip; // difference buffer, created elsewhere
int volume = 10000;
int phase  = +1;  // toggles between +1 and -1
int period = 100; // clocks between transitions
int delay  = 0;   // clocks until next transition
int amp;          // current amplitude in delta buffer

// Updates amplitude in delta buffer. Call when it might have changed.
void update_amp( int time )
{
    int new_amp = phase * volume;
   
    int delta = new_amp - amp;
    if ( delta != 0 )
    {
        amp = new_amp;
        blip_add( blip, time, delta );
    }
}

void run_square( int clocks )
{
    int time = delay; // time of next transition
   
    // in case volume was just written to
    update_amp( 0 );
   
    while ( time < clocks )
    {
        phase = -phase;
        update_amp( time );
        time += period;
    }
   
    // now time is beyond clocks
    delay = time - clocks;
}

// Runs sound for specified number of clocks
void run_sound( int clocks )
{
    run_square( clocks );
    // other waves ...
   
    // End time frame and make its samples available for reading.
    blip_end_frame( blip, clocks );
}

The emulator just calls run_sound() for however many clocks are needed, then continues on its way. When run_square( n ) runs, it's generating any transitions that fall within the time frame 0 to n. Later, when samples are needed, the emulator just calls blip_samples_avail() to find out how many are ready, and blip_read_samples() to read as many as are desired. Any unread samples are left in the buffer for reading later. The sound emulation code doesn't have to concern itself with samples at all, and as a bonus it now has clock accuracy with regard to code which does things like manual PCM by constantly writing to the volume register.
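
For completeness, the read-out side might look roughly like this; blip_samples_avail() and blip_read_samples() are the calls named above, but their exact signatures here are my guess, and queue_audio() is just a stand-in for whatever playback path is used:

/* Drains whatever samples are ready, e.g. once per video frame.
   Signatures of the blip_* calls are assumed, not taken from the library. */
void play_pending_samples( blip_buffer_t* blip )
{
    short buf [2048];
    int avail = blip_samples_avail( blip );
    while ( avail > 0 )
    {
        int n = avail < 2048 ? avail : 2048;
        blip_read_samples( blip, buf, n );
        queue_audio( buf, n ); /* placeholder for the SDL/output layer */
        avail -= n;
    }
}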

As for the conversion, I started with the original sn76489 code from in_vgm 0.35, then slowly integrated blip usage into it.

The first main change was to use a delta buffer for output. I did this in SN76489_Update(), and had it add each channel individually each time through the loop. Then I changed the "clocks per sample" to 1, so that the effective sample rate of the original code was the PSG rate (/16). This meant NumClocksForSample was always 1 each time through the loop. That simplified the channel handling to decrementing the ToneFreqVals each time through, and flipping the flip-flop whenever it became zero. These changes eliminated all the old linear averaging and anti-alias code, leaving the core emulation code.

I then separated the channel code out of SN76489_Update(), into RunTone() and RunNoise(), so that they could be reasoned about independently. This change had each channel being run separately for the entire duration, rather than each being run for one clock at a time. At this point they just had a loop that ran the channel code however many clocks were needed.

The final change was to rewrite the channel loops to effectively run multiple clocks at a time, as the example run_square() does above. So the tone went from

/* Runs tone channel for clock_length clocks */
static void RunTone(SN76489_Context* chip, int i, int clock_length)
{
    int time;
   
    /* Update in case a register changed etc. */
    UpdateToneAmplitude(chip, i, 0);
   
    /* Run one clock at a time */
    for ( time = 0; time < clock_length; ++time )
    {
        chip->ToneFreqVals[i]--;
        if ( chip->ToneFreqVals[i] <= 0 ) {   /* If the counter gets below 0... */
            if (chip->Registers[i*2]>PSG_CUTOFF) {
                /* Flip the flip-flop */
                chip->ToneFreqPos[i] = -chip->ToneFreqPos[i];
            } else {
                /* stuck value */
                chip->ToneFreqPos[i] = 1;
            }
            UpdateToneAmplitude(chip, i, time);
            chip->ToneFreqVals[i] += chip->Registers[i*2] + 1;
        }
    }
}


to

/* Runs tone channel for clock_length clocks */
static void RunTone(SN76489_Context* chip, int i, int clock_length)
{
    int time;
   
    /* Update in case a register changed etc. */
    UpdateToneAmplitude(chip, i, 0);
   
    /* Time of next transition */
    time = chip->ToneFreqVals[i];
   
    /* Process any transitions that occur within clocks we're running */
    while ( time < clock_length )
    {
        if (chip->Registers[i*2]>PSG_CUTOFF) {
            /* Flip the flip-flop */
            chip->ToneFreqPos[i] = -chip->ToneFreqPos[i];
        } else {
            /* stuck value */
            chip->ToneFreqPos[i] = 1;
        }
        UpdateToneAmplitude(chip, i, time);
       
        /* Advance to time of next transition */
        time += chip->Registers[i*2] + 1;
    }
   
    /* Calculate new value for register, now that next transition is
    past number of clocks we're running */
    chip->ToneFreqVals[i] = time - clock_length;
}

The rewritten sn76489 emulator runs about 100-220% faster than the original, and the noise is now linear averaged. It could still be micro-optimized a lot; this just shows the basic speedup from rewriting the loop as shown above.

With these changes, a FIR-filtered version of blip could be dropped in without any code changes, if one wanted more bandlimiting than linear averaging provides.
sn76489_blip_mod.zip (19.79 KB)
original and modified sn76489 emulator code

  • Joined: 06 Feb 2009
  • Posts: 110
  • Location: Toulouse, France
Posted: Wed Mar 03, 2010 6:07 pm
Hi Blargg,
I was looking at your code again after trying to use it the first time (I got the wrong pitch in some games so I abandoned the idea) and I think I found a bug in the frequency calculation.

In:


static void RunTone(SN76489_Context* chip, int i, int clock_length)
{
   int time;
   
   /* Update in case a register changed etc. */
   UpdateToneAmplitude(chip, i, 0);
   
   /* Time of next transition */
   time = chip->ToneFreqVals[i];
   
   /* Process any transitions that occur within clocks we're running */
   while ( time < clock_length )
   {
      if (chip->Registers[i*2]>PSG_CUTOFF) {
         /* Flip the flip-flop */
         chip->ToneFreqPos[i] = -chip->ToneFreqPos[i];
      } else {
         /* stuck value */
         chip->ToneFreqPos[i] = 1;
      }
      UpdateToneAmplitude(chip, i, time);
      
      /* Advance to time of next transition */
      time += chip->Registers[i*2] + 1;
   }
   
   /* Calculate new value for register, now that next transition is past number of clocks we're running */
   chip->ToneFreqVals[i] = time - clock_length;
}


I think it should be:

/* Advance to time of next transition */
      time += chip->Registers[i*2];


Also, it seems that UpdateToneAmplitude is called twice when the timestamp = 0, which results in two opposite values being written successively at the same time (I don't really know if that matters in the blip implementation).
  • Joined: 12 Nov 2005
  • Posts: 54
Posted: Thu Mar 04, 2010 3:46 pm
Argh, I think you're right about the frequency. I don't know how that slipped in.

Are you sure UpdateToneAmplitude() being called twice is writing opposite values? It's a wrapper that keeps track of the current amplitude, and only writes a delta if it's changed since it was last called (so the rest of the code doesn't have to care about deltas, just absolute amplitude).

Let me know if you're interested in using this code, as that would justify putting more time into working on it again.
  • Joined: 06 Feb 2009
  • Posts: 110
  • Location: Toulouse, France
Posted: Thu Mar 04, 2010 5:05 pm
I'm definitely interested in using it, for two reasons:

(1) later adding better filtering/resampling using your other BLIP implementation (I would probably need some help with this)

(2) using the chip's native cycle count makes it easier to accurately synchronize chip writes from the external CPU: with sample-based update functions, you have to convert the CPU cycle count into a number of samples, which the sound chip emulator then converts back into a number of cycles to run the sound chip (see the rough sketch below).
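
Something like this toy comparison is what I mean; the clock rates and names are only placeholders, not from any particular emulator:

/* 'cycles' is the CPU cycle count since the start of the current frame. */
enum { CPU_CLOCK = 3579545, CHIP_CLOCK = 3579545, SAMPLE_RATE = 44100 };

/* cycle-based core: one direct conversion from CPU cycles to chip clocks */
long long cycle_based_time( long long cycles )
{
    return cycles * CHIP_CLOCK / CPU_CLOCK;
}

/* sample-based core: rounds to whole output samples first, then back to
   chip clocks, so a register write can only land on a sample boundary */
long long sample_based_time( long long cycles )
{
    long long samples = cycles * SAMPLE_RATE / CPU_CLOCK;
    return samples * CHIP_CLOCK / SAMPLE_RATE;
}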

Quote

Are you sure UpdateToneAmplitude() being called twice is writing opposite values?


Let's take for example a tone register set to N and imagine you run exactly N chip cycles: the flip-flop does not occur and the frequency counter is set to 0.

The result is that the next time you run the chip, UpdateToneAmplitude is called once (at the entry of the function), then the flip-flop occurs immediately and UpdateToneAmplitude is called again with the same timestamp value (0), which I believe results in a delta of (2 x chip->Channels[i]) being added to the blip buffer because of the sign change.

See, this is called in UpdateToneAmplitude/UpdateChanAmplitude:

int delta = buffer [j][0] - chip->chan_amp [i] [j];
if ( delta != 0 )
{
   chip->chan_amp [i] [j] = buffer [j][0];
   blip_add( chip->blip [j], time, delta );
}

  • Joined: 12 Nov 2005
  • Posts: 54
Posted: Thu Mar 04, 2010 6:06 pm
I was actually going to suggest ditching my FIR resampler and using blip for everything recently anyway, because it makes synchronization of all the chips a breeze. I've stopped using my FIR resampler as well, due to this. We'll have to cover all the details in email...

Quote
See, this is called in UpdateToneAmplitude/UpdateChanAmplitude:

int delta = buffer [j][0] - chip->chan_amp [i] [j];
if ( delta != 0 )
{
   chip->chan_amp [i] [j] = buffer [j][0];
   blip_add( chip->blip [j], time, delta );
}


Once that's called the first time, chip->chan_amp [i] [j] gets set to buffer [j][0], so the next time delta = 0. Or maybe I'm missing something, in which case you can yell at me for missing the point twice in a row now. :)
  • Joined: 06 Feb 2009
  • Posts: 110
  • Location: Toulouse, France
Posted: Fri Mar 05, 2010 7:19 am
Blargg wrote
I was actually going to suggest ditching my FIR resampler and using blip for everything recently anyway, because it makes synchronization of all the chips a breeze. I've stopped using my FIR resampler as well, due to this. We'll have to cover all the details in email...


Well, for the YM2612, since internal updates are tied to the sample clock, it's easy to just run the chip at its original rate (1 FM clock = 1 sample = 24 * 6 VCLK) then simply use FIR resampling at the end of the frame. The real FM chip probably runs at a finer granularity (the internal clock is actually VCLK/6, with each operator/channel being synthesized sequentially) but no core emulates it at that accuracy (i.e. they are "sample" accurate, not "cycle" accurate).

I thought that for the PSG, since there isn't a "fixed" sample rate but rather multiple square waves at variable frequencies, using "real-time" resampling (or band-limited synthesis, as you seem to call it) was more natural.

Still, I don't really know the difference in terms of quality between your FIR implementation and Blip synthesis, but I was under the impression it was very good compared to libsamplerate, which I used before (and a lot faster too).

Quote
Once that's called the first time, chip->chan_amp [i] [j] gets set to buffer [j][0], so the next time delta = 0. Or maybe I'm missing something, in which case you can yell at me for missing the point twice in a row now. :)


Yeah, forget it, I was tired or something, because there is indeed nothing wrong: if the flip-flop occurs immediately, the level is going to drop from +1 to -1, which needs to be added to the blip buffer of course.

PS: fixing the frequency counter increment seems to have fixed the issue I had in some games. This was especially noticeable in games where FM notes and the PSG channel frequency are supposed to be "in sync". A good example is Sonic 2 (first level), where the background PSG seemed "out of pitch" with the FM music, but when muting FM it was not really noticeable. I don't really know how to better describe the effect.
  • Joined: 06 Feb 2009
  • Posts: 110
  • Location: Toulouse, France
Posted: Mon Aug 23, 2010 6:55 pm
Some more questions about SN76489 emulation:

(1) when the noise channel uses the same frequency as tone channel #3, I noticed you do this in the code:

  if (NoiseFreq == 0x80)
  {
    NoiseFreq = chip->Registers[2*2];
    chip->ToneFreqVals[3] = chip->ToneFreqVals[2];
  }


I don't understand why the noise counter should be reloaded with tone channel #3's counter value. In theory, the noise channel could have been running at its own frequency until then, so its counter would be at some specific value; shouldn't they rather be updated independently (with the noise channel still using tone channel #3's reload value, though)?


(2) has anyone verified what happens when writing the tone registers? Do the tone counters get reloaded immediately? In theory, you could make a channel output a constant value by always writing a new tone value before the counter has expired. As currently emulated, counters are only reloaded when they decrement to zero.