Post-jam reflection
Before I'd heard about this game jam, I had no intention of using my Lisp for anything besides doing coding exercises. So it sent me on a wild journey from which I'm still catching my breath, but I couldn't be more pleased with the results.
I spent significant time during the past month or two preparing the plumbing, thinking about how I wanted to do sound and graphics. And I had a pretty cool idea for a game, but at a certain point mid-week I had to make a decision to either do a crappy job with everything (but still learn a lot), or to focus on the music part and try to do it well, even if it meant not fulfilling the requirements.
When the jam started I had decided to use SVG and the Web Audio API and had done some basic tests, but the audio engine was still in its infancy. I had the triangle working (because it's my favorite sound in the world) and had implemented the noise channel with the LFSR algorithm. But I wasn't building the instrument channels into proper audio buffers, had no ability to export audio, and I hadn't yet implemented the pulse waves. I also wanted to at least make a couple of decent tracks with it, which consumes a lot of time and energy.
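For context, the NES noise channel generates its pseudo-random waveform with a 15-bit linear-feedback shift register. A minimal sketch of the idea, assuming the common mode-0 feedback taps (bits 0 and 1) — the names here are illustrative, not the engine's actual API:

```javascript
// Sketch of the NES noise channel's 15-bit LFSR (mode 0).
// Feedback is bit 0 XOR bit 1; the result is shifted into bit 14.
let lfsr = 1; // any non-zero seed works

function nextNoiseSample() {
  const feedback = (lfsr & 1) ^ ((lfsr >> 1) & 1);
  lfsr = (lfsr >> 1) | (feedback << 14);
  return (lfsr & 1) ? 1 : -1; // map the output bit to an audio sample
}
```

With these taps the register cycles through all 32767 non-zero states before repeating, which is what gives the noise channel its characteristic texture.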
If there's one thing that I learned from building this, it's that chiptune synthesis is not nearly as complicated as I thought. Or maybe... it just doesn't need to be.
Now, this is like coming back around full-circle from when I first started learning about this stuff. As with any new hobby, you tend to go into it very naive, and at a certain point you just feel like a total idiot. The moment that this really hit me was when I learned about aliasing, and the different types of bandlimiting used to mitigate it.
But somehow, I managed to sneak in the back door, so to speak, and not have to deal with any of that. How? By not ever sampling anything. Instead, the audio buffers are simply calculated.
For example, the 25% duty cycle pulse wave is represented like this:
[1, 1, 1, 1, 1, 1, -1, -1]
When I looked up on Wikipedia how to calculate a pulse wave, it said that the common way is to take a sawtooth and subtract it from a phase-shifted version of itself. But I thought, "Why would we need to do that?" We already know what the resulting values need to be: 1 or -1! That's it, it couldn't be anything else! So all we should need to do is determine which value it should be for any given sample as we build the audio buffer. The formula that I used is simply this:
Math.floor(i / (1 / (freq / (sampleRate / 8)))) % 8
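To make that concrete, here's a minimal sketch of how such a buffer could be filled using that formula and the duty table above (the function and variable names are my own, not the engine's API):

```javascript
// Illustrative sketch: fill a buffer with a pulse wave
// by computing the 8-step duty index for each sample.
const duty = [1, 1, 1, 1, 1, 1, -1, -1]; // the duty table from above

function pulseBuffer(freq, sampleRate, seconds) {
  const n = Math.floor(sampleRate * seconds);
  const buf = new Float32Array(n);
  for (let i = 0; i < n; i++) {
    // Which of the 8 duty steps does sample i land in?
    const step = Math.floor(i / (1 / (freq / (sampleRate / 8)))) % 8;
    buf[i] = duty[step];
  }
  return buf;
}
```

The nested divisions reduce to `Math.floor(i * freq * 8 / sampleRate) % 8` — the same step index, just easier to read.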
The whole time I was implementing it I thought maybe I was kidding myself and at some point reality would hit me that it couldn't be that simple. But my ears don't lie to me, and once it started coming together it sounded so much better than I was expecting it to.
A similar thing happened when I had to implement the mix function. Surely it couldn't be as simple as just taking the input buffers and adding them together. But that's actually all it is! It sounds great, and it's lightning fast.
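A sketch of that mixing step, under the assumption that each channel hands back a plain Float32Array (again, the names are illustrative):

```javascript
// Illustrative mix: sum the corresponding samples of each channel buffer.
function mix(...buffers) {
  const n = Math.max(...buffers.map(b => b.length));
  const out = new Float32Array(n);
  for (const b of buffers) {
    for (let i = 0; i < b.length; i++) out[i] += b[i];
  }
  return out;
}
```

If the summed channels ever clip past ±1 you can scale the result down, but with a handful of quiet chip voices it often isn't needed.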
Next steps
There are still a lot of possible directions to take this project. It's really just a skeleton of a Chiptune tracker, and I like that, but I have several ideas in mind for next steps. Some are small, others more ambitious.
- I'd like to implement an oscilloscope visualizer so you can see the various waveforms as they play. I'm looking at the corrscope project for inspiration, but I want it to have pretty colors like rusticnes.
- It would be really cool to be able to interface with existing NES music, whether that means importing music from games or saving songs as NSF files like Famitracker does. The former is a rather large project in itself, because it necessitates actually writing an emulator. The songs are written in 6502 assembly with no standard format; the only way to get note data out is to actually run the code.*
- There is supposed to be another channel for playing samples, which I have kind of purposely neglected because I don't really care about them... to be perfectly honest, one of the biggest reasons I love composing for the NES is its utter simplicity: I can focus on composition rather than sound design. However, many classic games are known for their samples, and that sound has a strong association with the genre. I also have to consider that I might not be the only person who wants to use the thing.
- I really skimped on the noise channel, which is supposed to be playable at 16 different pitches. It's also stuck with one fixed volume envelope, a linear taper that works well enough for many songs but is limiting nonetheless.
- For that matter, *all* of the volume levels are stuck, whether it be the envelopes of the individual notes, or the different channels in relation to each other. I just kind of got lucky by picking a couple of songs which were simple enough that it didn't matter.
- The good news is I already planted the seed for an effect system with the vibrato, which can be extended by expanding the API to support other parameters.
- Alright, I admit that I'd be lying if I said I didn't wish the interpreter was faster. But it's just so simple and I love that about it, so I'm not sure how to go about addressing that. Though the idea of writing a compiler is intriguing...
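On the vibrato point above: one simple way to sketch a pitch-wobbling effect is to drive the duty-step formula from an accumulated phase instead of the raw sample index, so the frequency can move without clicks. This is only an illustration of the idea, not the engine's actual implementation, and the parameter names are made up:

```javascript
// Illustrative vibrato: modulate the frequency with a slow sine LFO
// while accumulating phase, then reuse the 8-step duty lookup.
function vibratoWave(baseFreq, sampleRate, seconds, depth = 4, rate = 6) {
  const n = Math.floor(sampleRate * seconds);
  const buf = new Float32Array(n);
  let phase = 0; // in cycles
  for (let i = 0; i < n; i++) {
    const t = i / sampleRate;
    const freq = baseFreq + depth * Math.sin(2 * Math.PI * rate * t);
    phase += freq / sampleRate;             // accumulate cycles
    const step = Math.floor(phase * 8) % 8; // which of the 8 duty steps
    buf[i] = step < 6 ? 1 : -1;             // 25% duty table from earlier
  }
  return buf;
}
```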
* I asked about this on the NES Dev Discord, and got the following response from a respected member:
Ultimately, a sufficiently clever tool could probably figure out "this write to the sound register was caused by this memory, which was caused by this memory, which was caused by these memories, which were caused by these memories", and separate envelopes and score and re-convert that into just a score
What else am I missing? What would entice you to want to use this to make music? Let me know, I'd love to hear from you.
NES Music Engine
Lisp Chiptune editor
Status | Released
Category | Tool
Author | bobbicodes