Chuck it, rude boy!
Been playing around with Chuck, a real-time audio programming environment, again. Got a bit further this time: as far as playing a sample and doing some other bits and pieces. Not very close to making anything you could call music yet, though.
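For the record, playing a sample boils down to something like this (a minimal sketch in current Chuck syntax; "kick.wav" is a placeholder for whatever sample you have to hand):

    // load a sample into a buffer and route it to the soundcard
    SndBuf buf => dac;
    "kick.wav" => buf.read;

    // trigger it once a second, forever
    while( true )
    {
        0 => buf.pos;      // rewind to the start of the sample
        1::second => now;  // let a second of time (and audio) pass
    }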
I had the honour of being the first person to post to the mailing list. I hope the developers are responsive, as close contact with them is going to be pretty much essential given the rough-and-ready state of the documentation.
I found out that Chuck is a project of Perry Cook, one of the guys behind the Synthesis Toolkit (STK), a C++ API for sound synthesis that I had a play with last year. This is a good thing, as STK is very powerful and full of interesting audio potential. Chuck and STK are already integrated to a limited extent, and one would expect to see more STK stuff going into Chuck.
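You can already see the integration at work: some STK instruments are exposed as Chuck unit generators. A sketch, in current Chuck syntax (Mandolin is one of the STK physical models):

    // STK's mandolin model, driven from Chuck
    Mandolin m => dac;

    while( true )
    {
        Math.random2f( 220, 880 ) => m.freq;  // pick a pitch
        0.8 => m.noteOn;                      // pluck, velocity 0.8
        250::ms => now;
    }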
As regards the whole idea of programming music machines on-the-fly: it's tricky! Even with some pretty frantic Vimming, there's still a lot of time lost. Compare this to playing Ableton Live with keypresses and mouse scratching. I suspect that a successful programmer/musician will have to make heavy use of high-level language features. Chuck's .lex file seems to indicate that there is (or will be) support for functions and even classes. I wonder if it will be possible to create a class instance within the Chuck VM and modify it from other code while it runs. That would be handy, but it would require some kind of central object model.
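To make that concrete, here is the sort of thing I'm imagining, written as if the class support were already there (pure guesswork; Kick, interval and kick.wav are all invented):

    // speculative: a drum voice whose timing another shred can poke
    class Kick
    {
        dur interval;

        fun void play()
        {
            SndBuf buf => dac;
            "kick.wav" => buf.read;  // placeholder sample path
            while( true )
            {
                0 => buf.pos;
                interval => now;
            }
        }
    }

    Kick k;
    500::ms => k.interval;
    spork ~ k.play();       // run the loop as its own shred

    2::second => now;
    250::ms => k.interval;  // halve the gap while it plays on
    4::second => now;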
Another idea: it would be handy to have a Python-style toplevel interpreter. Say I set up a shred playing a loop, a bass drum. I might want to suddenly change the interval between hits and then almost immediately change it back again, to do a rush at the end of a quad of bars, for example. Creating one text file to set the interval one way, another text file to set it back, and then doing replace operations on the VM is annoying. Chuck usefully allows you to access the VM from within code, so you could set up a script which automates this. But say you don't want to automate it; say you want to play it by ear (literally). With a toplevel you could define library code so that these changes require only single-line instructions, and then play them much more easily.
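For reference, the scripted version I'd rather not be limited to looks roughly like this (bd.ck and bd_rush.ck are hypothetical loop files; current Chuck spells the VM interface Machine.add and Machine.replace):

    // swap the bass-drum shred for a faster one at the end of a
    // quad of bars, then swap it back, all on a fixed schedule
    Machine.add( "bd.ck" ) => int id;           // start the loop
    14::second => now;                          // about 7 bars at 120bpm, 4/4
    Machine.replace( id, "bd_rush.ck" ) => id;  // the rush
    2::second => now;                           // one bar of it
    Machine.replace( id, "bd.ck" ) => id;       // back to normal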
Maybe what Chuck really needs is some way to associate functions and code with time ranges. Some way of saying: in 8 bars time, run this bit of code (there's a crude sporked version of this sketched at the end of the post). But that would still be limiting. I find performing to be really self-referential: I play around until I hit something I like, then I repeat it. Chuck needs to allow much easier and more rapid prototyping of musical ideas.

Sadly, I'm not sure that code is the best interface for this. When playing with Live I very much appreciate being able to latch on to a control with my mouse and then jam with it, while hitting keys to play samples at the same time. Even the deftest master of multiple terminals couldn't do as well as that. But then maybe this is a style issue. Maybe I jam in real-time because I'm not actually a good enough musician to get into the deeper stuff, and I'm limited to making funny noises on top.
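For what it's worth, here is that crude version of the time-range idea: you can run code 8 bars from now by sporking a shred that sleeps first (a sketch; the bar length is an assumption):

    // run a bit of code 8 bars from now
    2::second => dur bar;  // assume 120bpm in 4/4

    fun void inEightBars()
    {
        8 * bar => now;    // sleep until the moment arrives
        <<< "bar 9: do the thing" >>>;
    }

    spork ~ inEightBars();
    16 * bar => now;       // keep the parent shred alive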