Why would a developer stop supporting an OS that’s only two years old? I imagine a few users (9% of you, as of this post) might ask that question when the next major TC updates drop support for iOS 7. I thought I’d put this out there for anyone who’s interested.
Some of the upcoming features in TC-11/Data require the latest tools Apple provides for rotation and layout. Unbelievably, Apple has rewritten the rotation system in every update since iOS 6, and with multitasking in iOS 9, the changes can no longer be ignored.
The choice is: support two totally different layout systems, or drop support for the old one. It’s worth mentioning that the vast majority of my development time goes into layout. The time it would take to develop for two different systems would slow feature adoption and new projects to a crawl.
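For context, the modern rotation handling (iOS 8 and later) replaces the old per-orientation callbacks with a single size-change method. A minimal sketch of what targeting only the new system looks like (the class name here is illustrative, not TC-11’s actual code):

```swift
import UIKit

class PerformanceViewController: UIViewController {
    // iOS 8+ rotation handling: one callback for any size change,
    // replacing the deprecated willRotate/didRotate orientation methods.
    override func viewWillTransition(to size: CGSize,
                                     with coordinator: UIViewControllerTransitionCoordinator) {
        super.viewWillTransition(to: size, with: coordinator)
        coordinator.animate(alongsideTransition: { _ in
            // Re-run layout for the new bounds alongside the rotation animation.
            self.view.setNeedsLayout()
            self.view.layoutIfNeeded()
        }, completion: nil)
    }
}
```

Supporting iOS 7 would mean keeping the deprecated orientation callbacks alive next to this, which is exactly the duplication described above.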
Xcode 7 no longer supports an iOS 7 simulator, and I’m down to a handful of physical devices still running iOS 7. You cannot downgrade devices, so I would be completely blind to any bugs / errors that users would encounter.
So it’s not a choice I took lightly, but I hope that the flow of cool new features will be worth the decision for most of my users.
MIDI sync in the TC apps: it’s been on the update list for a long time, which should give you hope if it’s on your wish list. The bad news is that it won’t be in TC-11 2.0 right away. Besides the technical challenges (Michael Tyson has a fantastic video about this), it’s worth taking a look at the history of the Sequencer in TC-11.
The Sequencer module in TC-11 was not conceived as a tempo-matching, clocked partner to external apps. Remember the dark ages before Audiobus? Yeah, me neither. But TC-11 was created back when iOS apps were single-screen, single-focus experiences. The Sequencer was never meant to communicate externally, and was actually based on the Arp 2500 Module 1027 Clocked Sequential Control:
I originally put the Sequencer in to generate three things: discrete controller data, note value / key filter control, and rhythmic data. The rate control was to be smooth (like the Arp), and it could be controlled globally or individually for each touch. So while it’s fully featured by itself, playing with others is something it’s not designed to do.
The silver lining is: MIDI sync will be a fantastic feature when it drops. It’ll just be a bit more time…
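To give a sense of what syncing involves: MIDI clock sends 24 ticks per quarter note, so a follower has to derive tempo from the spacing of incoming ticks. A minimal sketch of that tempo-estimation step (names are illustrative, not TC-11 code, and real sync also has to handle jitter, start/stop, and song position):

```swift
import Foundation

// MIDI clock sends 24 ticks (0xF8 messages) per quarter note, so
// BPM = 60 / (24 * secondsPerTick).
struct ClockFollower {
    private var lastTick: TimeInterval?
    private(set) var bpm: Double?

    mutating func receiveTick(at time: TimeInterval) {
        if let last = lastTick {
            let secondsPerTick = time - last
            if secondsPerTick > 0 {
                bpm = 60.0 / (24.0 * secondsPerTick)
            }
        }
        lastTick = time
    }
}

// At 120 BPM, ticks arrive every 60 / (24 * 120) ≈ 0.0208 seconds.
```

The hard part (and the subject of Tyson’s video) is doing this with low jitter on a real-time audio thread, which is a bigger project than the arithmetic suggests.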
TC-11 1.x will soon be the ‘legacy’ version of the app, and from the inside it may as well be a different app entirely. I was trying to think of a single file that carried over into 2.0 untouched, and I don’t think there is one.
Nowhere has this been more on my mind than the synth graph. TC-11 2.0 uses an entirely new connection system. The oscillators are higher resolution than before, there are new parameters for many of the audio objects, and the AHDSR, LFO, and Sequencers have been moved out of Pd and into Objective-C. Key filtering is out of the Sequencer and into the Oscillators / Filters (where it should have gone in the first place). The patch files themselves were completely rewritten, and require a full conversion process to be updated for 2.0:
Ensuring the new app works and sounds the same as 1.x is essential. I’m sure some users out there have their one favorite user patch, and if it doesn’t sound the same in 2.0, that’s a critical failure. And until recently, the 1.x patches sounded completely different. Conversion wasn’t working. Sleep was lost.
After lots of code spelunking and reverse engineering (is it reverse engineering if you wrote the app in the first place?) the patches are sounding great – just like 1.x. Of course, they also have tons of new stuff to play with, which makes this whole thing worth it. I’m just glad to be mostly out of the woods.
The next update to TC-Data is done, and that means turning my attention back to TC-11. It looks like I’ll be pushing out a quick update just for 1.8.x Audiobus support for iOS 8. As a palate cleanser (post-testing monotony), I’m playing with camera brightness. Here are some thoughts about bringing a new sensor into an app.
The fun is definitely front-loaded. Research the video framework, load up a camera in a test app, and get some images to appear on screen. What part of the video are we going to use as a controller? I think there are two good candidates: brightness and motion.
I’m familiar with brightness calculation from Processing (all the way back to Scanner I). The challenge with iOS is, frankly, finding the least gunky way to get the data needed. So I Googled some examples of brightness following and found one that works.
I start with low quality video to save on processing power, since we’re not going to see images from the camera. But that’s a bust because the frame rate is slow, and I do care about that.
Brightness following is go!
A quick look at Instruments shows I’m chugging as I create and destroy image data just to get the brightness. So all that code gets chucked. As it turns out (as some photographers may have known all along), there is a BrightnessValue entry in the camera’s EXIF dictionary. Free data! I’ll take it when I can.
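A sketch of that approach (not TC-11’s actual code): pull BrightnessValue straight from a sample buffer’s EXIF attachments inside an AVCaptureVideoDataOutput delegate callback, with no per-frame image processing at all.

```swift
import AVFoundation
import ImageIO

// Reads the camera's own exposure-derived brightness value from the
// EXIF attachments that ride along with every video sample buffer.
func brightness(from sampleBuffer: CMSampleBuffer) -> Double? {
    guard let attachments = CMCopyDictionaryOfAttachments(
            allocator: kCFAllocatorDefault,
            target: sampleBuffer,
            attachmentMode: kCMAttachmentMode_ShouldPropagate) as? [String: Any],
          let exif = attachments[kCGImagePropertyExifDictionary as String] as? [String: Any]
    else { return nil }
    return exif[kCGImagePropertyExifBrightnessValue as String] as? Double
}
```

Since the value comes from the capture pipeline’s own metadata, the pixel buffers never need to be touched, which is what makes it effectively free.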
Like I mentioned, the fun is all at the beginning. Dropping this into the TC apps will take a lot of testing. Brightness values depend on the camera settings, and this should be able to react consistently in both high and low-light conditions. The stream has to start / stop at the correct times in the app lifecycle, the controller drawing needs to stick to the side of the screen that has the camera, etc. But it’ll be there soon.
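One plausible way (an assumption on my part, not the shipped approach) to get consistent behavior across lighting conditions: smooth out frame-to-frame jitter, then rescale against the range seen so far, so the controller output always sits in 0…1.

```swift
import Foundation

// Conditions a raw brightness stream for use as a controller:
// exponential smoothing to tame jitter, then running min/max
// normalization so dim and bright rooms both span the full range.
struct BrightnessController {
    private var smoothed: Double?
    private var minSeen = Double.greatestFiniteMagnitude
    private var maxSeen = -Double.greatestFiniteMagnitude
    let smoothing = 0.9  // closer to 1.0 = slower, steadier response

    mutating func update(raw: Double) -> Double {
        let s = (smoothed ?? raw) * smoothing + raw * (1 - smoothing)
        smoothed = s
        minSeen = min(minSeen, s)
        maxSeen = max(maxSeen, s)
        let range = maxSeen - minSeen
        return range > 0 ? (s - minSeen) / range : 0
    }
}
```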
When TC-11 was first released, it didn’t have the floating touch preview view in the Patch area. That was the result of a common pitfall: what is obvious to you is not necessarily obvious to your users. I knew how adjusting parameters affected the patch, so there was little trial and error beyond flipping back and forth to the Performance view.
When users made the (in hindsight, obvious) request for a way to preview patches as they programmed them, the floating preview view was born. To implement it meant squeezing all of the functionality of the Performance view into the small subview, but there was one nagging issue: drawing.
The visuals drawn on the Performance view were updated in the same methods as the touch controller generation, i.e., the drawing code and touch code were intertwined. The correct thing to do was to decouple the two on the Performance screen. The easy thing to do was to duplicate the touch control code for the preview view and be done with it. Path of least resistance, ahoy!
Now, a few years later, I’ve finally corrected my mistake. The touch control code and OpenGL code have been separated, and the main view and preview view are driven by the same code. This brings some nice portability: the touch control view can be dropped into any project and quickly connected. Even more interesting is that there could be two or more copies of the touch control view on the same screen. Split view mode?
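The decoupled shape looks roughly like this (protocol and type names are illustrative, not TC-11’s actual API): the touch view only reports touch data, and any number of observers render from the same stream.

```swift
import UIKit

// The touch view knows nothing about drawing; it just publishes touch
// points. Main view, preview view, or several copies can all observe.
protocol TouchControlObserver: AnyObject {
    func touchesDidChange(_ points: [CGPoint])
}

class TouchControlView: UIView {
    var observers: [TouchControlObserver] = []

    private func report(_ event: UIEvent?) {
        // Broadcast all current touch locations to every observer.
        let points = (event?.allTouches ?? []).map { $0.location(in: self) }
        observers.forEach { $0.touchesDidChange(points) }
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) { report(event) }
    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) { report(event) }
    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) { report(event) }
}
```

With rendering behind an observer interface, a second on-screen copy of the touch view is just a second observer, which is what makes a split-view mode thinkable.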
I was really excited to present my research on TC-Data at the ICMC-SMC conference in Athens, Greece. The conference was packed with music – five concerts a day. There was a really nice acousmatic setup in the main concert hall at the Onassis Cultural Center.
The keynote speakers were Jean-Claude Risset, John Chowning, Gerard Assayag, Peter Nelson, and Curtis Roads. It was amazing to hear them all over the course of the week. Chowning’s talk was particularly engaging: tracing his origins into computer music and the beginnings of all that we work with today.
TC-Data is the latest iteration of my touch control research. The goal was to unlock the touch screen controllers for generalized use.
Although based on TC-11, the interface and backend of TC-Data was completely rewritten to be as lightweight as possible. This meant redesigning the engine that ran the controller generation so that the iPad could be free to use its processor for the apps that need it – the synths that TC-Data can target.
I exhibited TC-Data at the ICMC-SMC conference in Athens, and was really happy to meet people that were excited about the project. So far it’s been a great launch!
On July 13th Minor Vices got together to record Misdemeanors, a structured improvisation by Chris Burns. The 26 short movements of the piece set rules for the ensemble to follow while they play. In addition to being great music, excellent performances, and absolute fun, we got to break in new recording equipment and fully use a space we’ve occupied for years.
Minor Vices is: Chris Burns, David Collins, Adam Murphy, Trevor Saint, Kevin Schlei, Amanda Schoofs, and Seth Warren-Crow. Steve Schlei was the recording engineer.
WWDC has been incredible so far. Even waiting in line (not uncommon) means meeting tons of enthusiastic and friendly developers. The sessions have been fantastic. The upcoming API improvements are going to make things much easier, and there are some incredible new technologies I’ll be dropping into my apps.
I’ve met with Apple engineers from the Core Audio and Cocoa Touch teams, and with UI designers, to chat about my apps and my approaches to interface development. It’s been great to ask the API designers how my interface code could be better organized. I’ve learned a lot and there’s still more to go!