Give the Drummer Some More: Advances in Breakbeat Analysis/Synthesis
Jason Hockman

>> Check, check, check, check, check!

>> Test, test. Just to let everyone know, we'll start in about 5 minutes... 5 minutes...

>> Okay! How is everybody doing? Cool. Lunch was good. I want to really, you know, thank the first two speakers we had today, M and Deirdre, for killer, awesome talks so far, right? [ Applause ] Now we're gonna start our next set. And this one is a little special for me. This person right here, Jason Hockman, I've known for many years. We went to school together, at New York University. And actually Jason is the one responsible for me even wanting to do computer science. He told me to take a class that was probably a bad idea at the time. And I took a class and it kind of changed my life. You know, nothing too much. But Jason and I were musicians -- or people interested in music and audio first, before doing other stuff and getting into other stuff that we've done over time. And Jason has just been awesome. You know, I knew him not only as a great person and a great researcher, but he's a great musician. Jason, actually, you can find him on Spotify, SoundCloud, that stuff as well -- great electronic musician. We wanted something really cool. He has been doing great work in his lab at Birmingham City University, the DMT Lab. And when I wanted to get him for this talk -- he gave a great talk at the Ableton Loop conference, talking about breakbeats and their history. We've got to do something like this and talk about the future and the lab and the research that's coming out in this sector. I owe so much of my life to Jason. Now he has to do a great talk because of that, I hope. And I want to, yeah, introduce Jason Hockman, everybody.

JASON: Thank you. [ Applause ] I don't know where to go from there. But thank you very much, Zee, very much appreciated.

Okay. So, hello, hello, my name is Jason Hockman. And this is Give the Drummer Some... More. I'll be talking about advances in breakbeat analysis and synthesis. And as Zeeshan said, I'm not just -- I can't remember. I should have prepared that. I'm not just the blank... owner... I'm the user as well, right? Did you know that from -- what was it? Yeah, yeah. That's the one, Hair Club for Men. I'm not just the president, I'm also a user. Or whatever it is. Yeah. Anyway.

I'm a musician. I make breakbeat music. I've been doing that for a number of years. I have a record label that I run. So, if you're interested in checking out any of the record label stuff, it's DMT Transmissions -- shameless plug that I put in my slides. But I'm also a researcher at the Sound and Music Analysis Group. And that's at the DMT Lab, the Digital Media Technology Lab, at Birmingham City University. So, just another quick shameless plug for the research stuff. The stuff we work on is mostly music information retrieval, audio signal processing, a bunch of machine learning and AI for music, and things in that area. The lab is comprised of 11 researchers and Ph.D. students. Yeah. The stuff that I'll be talking about today is all outputs of the lab.

So, there are three parts to the talk. First, I will talk about what breakbeats are, where they've come from, and why they're important. Part two will be the analysis techniques that we've developed for tracking breakbeats in music. And then part three will be about drum synthesis and rhythm transformations that we have been working on recently. So, as a definition.
Breakbeats are samples of percussive solos from funk or jazz recordings, typically from the 1960s to the 1980s. They're used in loads of genres and subgenres, from hip-hop to jungle to drum & bass to footwork -- even David Bowie has used one, in Little Wonder. They were originally recorded from vinyl records, segmented and manipulated in samplers, and then rearranged through the use of sequencers. They typically have a 4/4 time signature, and it's heavy syncopation in the individual drums that reinforces the overall feeling of meter.

So, it's all well and nice to talk about what breakbeats are. But without giving examples, you know, it's kind of hard to actually really understand them. So, in 1970, a track came out: James Brown's band recorded Funky Drummer. It's not the first breakbeat, but it's quite a popular one. His drummer is playing a highly infectious groove on his drum set. And James Brown thought it was so good, he called out to Clyde Stubblefield, "give the drummer some" -- that's where the name of the talk comes from. And he basically wanted to let Clyde Stubblefield solo. And he does this solo at the 5 minute 20 mark. And there's -- it's been turned down? No? Since before. Nope.

>> Sorry...

JASON: There it is. ¶ Okay. Okay. And as soon as he's done, that's it. That was the break. When the percussion solo was done, that was it. So, the typical instruments in breakbeats are bass drums, snare drums, floor toms, hi-hats, cymbals. But they can also include vocal shouts like the last one, tambourines, bongos, other sounds. Another well known one is Michael Viner's Incredible Bongo Band. This is Apache. ¶

Okay. So, the origins of breakbeats are in the rhythms of New Orleans street processional drumming. They were adapted to the drum kit by Earl Palmer in the 1950s as the New Orleans beat or the street beat. And then in the 1960s, James Brown's drummers like Clyde Stubblefield and Clayton Fillyau introduced these syncopated patterns known as fatback grooves: 16th-note grooves with very little swing, in 2- or 4-bar patterns with accents on the 2 and the 4. If it wasn't on the 2 and the 4, maybe it was a snare moved an eighth note later or earlier.

So, breakbeats were originally used by DJs in New York. Kool Herc used two turntables and a mixer to move between breakbeats on two records. He did that because he noticed people were dancing more energetically during the breakbeat sections. Later, Grandmaster Flash incorporated beat matching to improve the seamless transition between different tracks that were being played. And later, musicians would use a tape deck and a turntable to record a break onto the tape deck, hit pause, rewind the break on the turntable, and then release it and hit pause again to record more of the break on. And keep doing that and doing that to make it longer and longer and longer. Then machines started to come out that were useful for chopping up and layering breakbeats with other sounds, like an 808 boom for effect. And then fast forward to today, you have all sorts of breakbeats available -- you can just download huge collections where you have loads of breaks, some of them already chopped up with a MIDI file for usage within a sequencer. So, there's tons now for people to use.

So, I did a small ethnographic survey of the different techniques that people employed when they were introducing breakbeats into their music.
And among hardcore, jungle, and drum & bass musicians, which is kind of my focus, resequencing came out on top. Other techniques that people used were also widely popular, but resequencing was the most popular. This is basically when you chop up the break into individual segments and then rearrange them to create new rhythmic patterns that are of interest to you. Also of interest were pitch modification techniques -- pitching the break up or down, speeding it up or slowing it down -- distorting the break, reversing it, playing it backwards like -- what's that Beastie Boys tune? Multi-break layering, where you take two breaks and layer them on top of each other, two or more, really. Multi-break alternating, going from one break to the other, switching back and forth. Time stretching, reverberation -- there were a host of others, but these were the most often used.

So, in order to actually introduce the breaks into the music you need a few items, especially back in 1992. This is the home studio of Andrew Wright, who is a hardcore musician from Dunstable, England. Here you see he used a Commodore Amiga for a computer, on which he did his sequencing. And then the Ensoniq EPS16+ -- this one is still knocking around today, by the way. He used that for sampling, and sampling breakbeats specifically too. And there are some records there that he would have had breakbeats on that he would have sampled.

So, the process basically looks like this. You have the original breakbeat on the bottom. This is one that might be two measures long. The white boxes on the bottom show the slices that are going to be reused in the new track, and the gray ones are the ones that are not used. So, here we see k1 is used in two different places. h1, the hat, is not used. The snare is used in different places, and so forth. The problem of which slices to use, and how you can use them, comes in when you have other drums that sit on top of those drums. So, if you have a crash that's present on top of the kick, hat, snare, hat, hat, and the next kick, you can't really rearrange those too much, because you'll have the decaying crash across those drums. As such, if you wanted drums that didn't have the crash on them, you're limited to just k3 and h4 there. Right? So, this plays into the selection process and the modifications that people can make when they're working on their music.

So, this is the Amen break I'm going to play next. Just a show of hands. Does anyone -- does everybody know what this is? If you don't, you will -- you will as soon as you hear it. Has anyone ever seen Powerpuff Girls? Okay. More people have seen Powerpuff Girls -- you'll know this, then. ¶ That's the Amen. You don't have to say much more -- breakcore, jungle, drum & bass, yeah. I won't say any more about the Amen because you know what it is. Now, this is Breakage's remix of Equinox's Acid Rain, which came out in 2004. ¶ It's heavy. ¶ So, we're missing a few items. We need a sub, at least two subs. Smoke machine. Flashing lights. [ Laughter ] But I'm telling you, it goes off when you hear that in the club. It just goes off. It's very, very intense.

So, I'll now move to the analysis part -- stuff that we've developed for tracking breakbeats. And the motivation behind what we're doing here is we wanted to be able to track this resequencing.
And the main reason that we want to do that is because we want to trace how an original breakbeat changed from its original conception -- how it was originally recorded -- into all the different variations that you might have heard just there, with the Amen and so forth. So, perhaps you want to see how certain rhythms were adopted. You know? When certain rhythm patterns came in and then became like the go-to usage of an Amen or the go-to usage of an Apache. Who used certain breakbeats within particular genres is of interest too. Because then you can start to see these pathways of -- it's not necessarily intellectual property, but, you know, it's something that someone did that was infectious that became a whole new subgenre. So, it's very important to see that if one wants to understand the complexities and details of these new genres, rather than just the more basic -- I shouldn't say more basic -- theoretical stuff that doesn't really apply here.

So, say, for example, you want to look at Baby Kane's Hello Darkness. If we did some kind of automated analysis of the track, we could see that it used the Assembly Line break by the Commodores, from 20 years earlier. And then if you analyzed this and other tracks that used this breakbeat, you could then see the different directions in which the rhythmic creations had taken place, within the context of Baby Kane and the subgenre. It's somewhat similar to automating the connections that you would get from the WhoSampled.com resource, if you're familiar with that.

So, to understand these resequencing techniques that people would use, Matt Davies and I worked on this workflow, essentially to identify the breakbeat arrangement that was being used once a particular breakbeat was found. And in order to do this, there are a few different sub-tasks associated with it: drum separation, which is a source separation task; drum transcription; then breakbeat classification, downbeat detection, and this resequencing analysis. In the last 7 years at BCU, we've focused mainly on the drum separation, breakbeat classification, this resequencing analysis, and drum transcription. So, in this next little bit I'll talk about finding breakbeats within tracks, and drum transcription.

So, during his Ph.D. at BCU, Carl Southall developed a state-of-the-art drum transcription algorithm and made several improvements along the way. Prior to this work, the state of the art was non-negative matrix factorization. It essentially would reduce a spectrogram into a set of basis functions that were associated with the spectra of particular drums, and activation functions that were associated with where those drums were present within a timeline. Now, one of the difficult things with drum transcription is tracking the snare drum, mainly because the snare drum has so many different playing styles associated with it. So, you can hit a snare drum, you know, with a Mutt Lange big hit -- that was just that kind of -- doosh! Kind of hit. You can play ghost notes. Hit it on the side as a rim shot. You can use different tools to strike it. You can adjust the snares so that they're tense or soft. And this is all before talking about different sized drums and different makes and models, and stuff that people put onto drums to soften them, like Moongel and all that kind of stuff. So, because of that, it's a difficult problem.
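To make the NMF idea concrete, here is a minimal sketch of that style of decomposition -- not Carl's system, and the file name, component count, and threshold are assumptions for illustration only:

```python
# Minimal sketch of NMF-style drum transcription (illustrative only).
import numpy as np
import librosa
from sklearn.decomposition import NMF

y, sr = librosa.load("breakbeat.wav", sr=44100)            # hypothetical input file
S = np.abs(librosa.stft(y, n_fft=2048, hop_length=512))    # magnitude spectrogram

# Decompose S ≈ W @ H: columns of W are spectral templates (ideally one per
# drum: kick, snare, hat), rows of H are the corresponding activation
# functions over time.
model = NMF(n_components=3, init="random", max_iter=500, random_state=0)
W = model.fit_transform(S)        # (freq_bins, 3) basis spectra
H = model.components_             # (3, frames) activations

# Very crude onset estimate per component: frames where the activation
# exceeds a fraction of its own maximum.
for k in range(H.shape[0]):
    peaks = np.flatnonzero(H[k] > 0.5 * H[k].max())
    times = librosa.frames_to_time(peaks, sr=sr, hop_length=512)
    print(f"component {k}: candidate hits at {times[:5]} ...")
```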
We knew at the time that neural networks were highly useful in other frame-based detection tasks such as onset detection. And we also knew that they're very good at capturing complex constellations of features and then associating those with a given class, all right? That's what they're good for. They're good for tons of stuff. But anyway. So, we applied them in a frame-based detection of drum instrument onsets, where we have individual neural networks for each instrument under analysis. And the output of each of these is an activation function, which is then peak-picked for the presence of drums within that timeline.

So, to utilize information from previous and future time steps, we chose to use recurrent neural networks. These are particularly good at observing changes over time through connections between timesteps. The timesteps can be associated with a single direction -- so, like, the white connections there would be the forward RNN connections. Or there can be backwards connections, which include information from the future in the present calculation -- those would be the blue, dashed lines.

So, in a comparison to NMF, the bidirectional RNN achieved higher performance for drum solos, and also significantly higher performance for snare drums within multi-instrument mixtures including guitars, bass, and also vocals. So, on the top -- it's kind of a difficult spectrogram to see in this condition, but you'll have to believe me that there's spectral content in the spectrogram within those green rectangles. It's a mixture of kicks, hats, and snares in that top subplot. And NMF gets the true positives -- lovely, right? It's found all the true positives. But it's also picking up information from the hats. So, it has false positives, found at a lower level, but still finding those. Now, the neural network one, on the bottom, actually gets the true positives, but none of the hat information is present.

One thing to notice, though, is that there is a discrepancy between the output of the activation function and the idealized output, which is what we'd want at 1, right? Now, it may not seem like too much, but, you know, within the context of one of these problems it can be quite variable, and it can substantially affect performance during peak picking. And so, our next task was to attempt to improve this by either looking at the peak picking process directly, or looking at activation function generation that would make peak picking easier. So, peak picking involves three timesteps in order to perform it: you have two low points and one high point in between, right? So, it's a peak. And our first attempt was to train a machine learning algorithm for peak picking. And it helped in some contexts. But really, it just ended up being like a band-aid solution, because we couldn't plan for all contexts, and there was always going to be something else that was missed out. It was not the best way to do it.
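As a concrete illustration of the peak picking step itself, here is a minimal sketch of a three-timestep peak picker over an activation function; the fixed threshold is an assumption, not the value used in the actual systems:

```python
# Minimal sketch of three-timestep peak picking on an activation function.
import numpy as np

def pick_peaks(activation, threshold=0.3):
    """Return frame indices that are local maxima above a fixed threshold."""
    peaks = []
    for t in range(1, len(activation) - 1):
        prev_, curr, next_ = activation[t - 1], activation[t], activation[t + 1]
        # A peak needs a lower point before and after, plus enough salience.
        if curr > prev_ and curr >= next_ and curr >= threshold:
            peaks.append(t)
    return np.array(peaks)

# Usage: the activation would come from a per-instrument RNN output.
activation = np.array([0.0, 0.1, 0.8, 0.2, 0.05, 0.4, 0.9, 0.3, 0.0])
print(pick_peaks(activation))   # -> [2 6]
```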
So, instead, we considered the training process for the RNN, which used a single timestep rather than three timesteps. A single timestep is used in cross entropy or mean squared error, but the peak picking process uses three timesteps. So, Carl developed new loss functions for training neural networks that use three timesteps, adding to standard cross entropy -- that's the one on the left, which measures just the present discrepancy. We added a peak salience measure to introduce cross entropy from past and future timesteps. There are a couple of different flavors of this that we tried, and the multi-difference peak salience measure was the one that performed the best. This one measures the absolute differences between sequential timesteps, between the target and the output. In evaluating these loss functions within the context of two different tasks, onset detection and drum transcription, we found that they improved the results across all the systems we tested, for both tasks.

So, then, another issue within the context of neural network-based drum transcription is the relatively small amount of training data that exists. When there's not enough training data, these systems fail to generalize. And unfortunately, within the context of datasets for this task, we're never gonna have datasets that have all the interactions that we want between the different instruments. So, for example, you're not gonna get someone singing while there's a snare and an open hat, you know, with a bass playing, or something like this. And because of that, it's very difficult to get success out in the wild. So, to solve this, Carl incorporated the concept of adversarial networks into an automated data manipulation paradigm. Here we have the same drum transcriber as I've explained thus far, which outputs activation functions that are peak-picked and so forth. But the big difference is that we have a player network, which is here, and is involved in a two-stage iterative training process with the transcriber, where it will take in existing training data and samples from a library, manipulate them, and add them together to create manipulated data that the transcriber has not yet seen. And when the transcriber is no longer improving, that's when you can use it out in the wild, or perform the tests that we do in papers and stuff. Yeah.

So, having a little closer look at the transformations: the existing data is transformed in frequency to kind of model changes in drum performance, or in the sizes of particular snare drums and this type of thing. And isolated samples also undergo a similar process. They're then added together, along with generated positions for the drums. All of these things are determined by the player network. And, yep, it improved performance.

So, drum transcription will tell us where the drums are within the context of an audio waveform. But what we don't know is where the breakbeat sections exist within the track. So, in order to do that, we apply the task of automated instrumentation role classification, or loop activation transcription. This is work that was led by a Ph.D. student at BCU, Jake Drysdale, along with Antonio Ramirez in Barcelona. To give you an idea of what the task entails: on the top we have a spectrogram -- it's basically an electronic music track that's built with five loop layers. And on the bottom, we have the corresponding activations associated with that track. So, the activations could be chords, melody, FX, bass, and percussion. And the idea is to perform some multi-label classification. [Audio stopped] -- Okay.
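Returning for a moment to the three-timestep loss described at the start of this part, here is a minimal sketch of the general idea -- standard cross entropy plus a term comparing frame-to-frame differences of output and target. The weighting and exact form are assumptions for illustration; the published formulation may differ:

```python
# Minimal sketch of a three-timestep, peak-salience-style loss.
import torch
import torch.nn.functional as F

def peak_salience_loss(output, target, weight=1.0):
    """output, target: tensors of shape (batch, frames) with values in [0, 1]."""
    bce = F.binary_cross_entropy(output, target)

    # Absolute differences between sequential timesteps, for output and target.
    d_out = torch.abs(output[:, 1:] - output[:, :-1])
    d_tgt = torch.abs(target[:, 1:] - target[:, :-1])

    # Penalize disagreement in how sharply each signal rises and falls, which
    # encourages peaks in the activation to be as salient as in the target.
    salience = torch.mean(torch.abs(d_out - d_tgt))
    return bce + weight * salience
```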
So, the system builds on recent literature in music autotagging and sound event classification, and as such we use a convolutional neural network to make multi-label predictions for spectrograms generated from audio loops. The frontend uses several vertical filters of different sizes -- specialized sizes that were specific to capturing wide spectral shapes like bass and chords, and shallow spectral shapes like drums, because they're shorter. This is followed by several 2D convolution layers, with auto-pooling at the end to summarize the information that's been captured by the network.

And here is an example of the estimated loop activation structure within an actual track. This is Joyspark by Om Unit, using the proposed model. On the top we have the spectrogram. On the bottom we have a four-bar estimation of the role presence for those same classes I just described. ¶ Okay. So, the same system can be used for breakbeat prediction. In this particular case, we have the Amen as we heard before, the labeled ground truth -- the annotation -- and then the breakbeat activation that is the output of the system. You can guess where it is when I play it. ¶ [Drums and horns] There. [Drums only] And then that.

So, moving on to drum synthesis and rhythm transformations. The motivation for doing this work is that the tools and techniques for breakbeat manipulation and editing have, for the most part, not advanced much since the early 1990s. This basically leaves musicians to select breakbeats based on a hidden transformation potential that's tied to things like the horizontal and vertical overlap of sounds, syncopation, and other aspects of the breakbeat that are unchangeable in that way. So, we've therefore built systems that allow for drum synthesis, to kind of navigate around this issue with drum timbre -- to adjust drum timbre -- and rhythm transformations, to allow for variability within the patterns.

So, producers work in digital audio workstations such as Ableton Live and Logic Pro to make their music, and these provide users with a variety of tools and techniques they can use for modifying breakbeats. But finding the right drum sound can sometimes be difficult and a bit time consuming. Here's a video Jake put together to demonstrate the problem. Okay. So, producers will generally, in a typical production scenario, layer individual segments of breakbeats with other sounds so they're more present within a mix. As you can imagine, when you're trying to do that -- when you're trying to find different drums, then you're EQing them, mixing them with compression, and so forth -- it's very difficult to maintain the motivation that you had when you were actually creating a piece of music. So, in this work we present a system that's used to generate and fine-tune drum sounds based on a music producer's own personal collection of sounds. Unlike traditional drum synthesis, neural drum synthesis exploits deep learning to generate samples with the same or similar statistical properties as the training data. This is work led by Jake Drysdale and Maciej Tomczak. And -- yep. Here we have a user's personal collection of labeled drum sounds, organized in a hierarchy of sub-folders, all right?
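Before the synthesis details, here is a minimal sketch of the kind of multi-label, vertical-filter classifier described earlier in this part. The filter heights, channel counts, and the five role classes are assumptions for illustration, not the published architecture:

```python
# Minimal sketch of a multi-label loop/role classifier with "vertical" filters.
import torch
import torch.nn as nn

class RoleClassifier(nn.Module):
    def __init__(self, n_mels=96, n_classes=5):
        super().__init__()
        # Vertical filters: tall ones for wide spectral shapes (chords, bass),
        # short ones for shallow spectral shapes (drums).
        self.frontend = nn.ModuleList([
            nn.Conv2d(1, 16, kernel_size=(h, 3), padding=(0, 1))
            for h in (int(0.9 * n_mels), int(0.4 * n_mels), 7)
        ])
        self.backend = nn.Sequential(
            nn.Conv2d(48, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),           # summarize time and frequency
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, spec):                   # spec: (batch, 1, n_mels, frames)
        feats = [conv(spec).amax(dim=2, keepdim=True) for conv in self.frontend]
        x = torch.cat(feats, dim=1)            # stack the three filter banks
        x = self.backend(x).flatten(1)
        return torch.sigmoid(self.head(x))     # independent per-role probabilities

# Usage: probabilities over e.g. [chords, melody, FX, bass, percussion].
probs = RoleClassifier()(torch.randn(2, 1, 96, 256))   # -> shape (2, 5)
```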
In this work, we use a generative adversarial network that operates directly on waveforms, and it's used to learn a mapping between the collection of drum sounds and a low-dimensional latent space. The discriminator network learns to differentiate between real and generated waveforms, and the generator network learns to synthesize waveforms that try to trick the discriminator into predicting that they're from the training dataset. The input to G is a latent vector, sampled from a prior distribution at a much lower dimensionality than the original data, which allows for high-level control over the generation possibilities. The system was built on WaveGAN, but includes conditioning for several drum classes. The conditioning in D and G is through an integer value, encoded through an embedding layer, to help remove overlap between the different classes. So, you can imagine that a snare drum and a hi-hat have some kind of timbre overlap in certain cases. What we don't want is someone generating sounds expecting a snare drum, moving through the latent space, and then having it turn into a hi-hat or kick drum or something like that, and then back to a snare. Using this conditioning helps to remove that overlap.

So, in G, the latent vector and the embedding layer output are fed through a dense layer, reshaped, and then concatenated, and they go through a series of these upsampling blocks that take the smaller vector and upsample it to the point where it's the size of the drum sample we're generating. And then in D, the process is reversed: we have a waveform and a condition, and those are fed into downsampling blocks that downsample to the point at which the discriminator will determine whether or not the waveform is real or fake. So, given a latent vector and a condition, the generator learns and improves over time. At first, it's not very good, and then over time it learns to make things that sound like snare drums. Okay? And then once training is complete, the generator output can be explored by selecting a condition and traversing the latent space. Okay?

So, we have a few examples here. I should note that the audio quality isn't optimal at this point because the model was trained on Jake's home PC. The quality could be improved using a more powerful machine, though. So, we have a kick A. And then kick B. [Kick] And then if we do an interpolation from kick A to kick B. [Interpolation of kicks] I think the snare is -- there's quite a lot of reverb in this room, but you might be able to hear it. So, here's a snare A. And snare B. Okay. You can hear those a little more clearly. And then the interpolation. [Interpolation of snare A and B] And cymbal A, and cymbal B, and the interpolation. [Interpolation]

All right. So, that's a small track that Jake made -- or, track? A small beat that Jake made to demonstrate the effectiveness of using slight changes and modifications of the positions within the latent space, so you can get more humanized drum sounds out of it, right? One of the problems with using breakbeats, especially when you don't have lots of slices to use, is that you end up using the same slice again and again. It becomes quite repetitive and you get listener fatigue. This is one way to avoid that.

So, what are the difficulties with many of these neural drum synthesis models? Exploring this latent space is actually quite complex. If a latent space has 128 dimensions, then a user would have to provide 128 different parameters every time they want to generate a drum. This is a lot different than, say, you know, an early '90s drum synth that might have five parameters, like snap, a pitch envelope, original pitch, and some other more basic ones.
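A minimal sketch of the kind of latent-space exploration demonstrated above -- interpolating from kick A to kick B under a fixed class condition. The generator interface, class index, and output length are assumptions, with a small stand-in network in place of the trained model:

```python
# Minimal sketch of conditional latent interpolation between two drum sounds.
import torch
import torch.nn as nn

latent_dim = 128
KICK = 0                                   # hypothetical integer class condition

class StandInGenerator(nn.Module):
    """Placeholder with the assumed interface of a trained conditional G:
    (latent vector, integer condition) -> mono waveform."""
    def __init__(self, n_classes=3, n_samples=16384):
        super().__init__()
        self.embed = nn.Embedding(n_classes, 32)
        self.net = nn.Linear(latent_dim + 32, n_samples)

    def forward(self, z, condition):
        return torch.tanh(self.net(torch.cat([z, self.embed(condition)], dim=1)))

generator = StandInGenerator().eval()      # in practice: the trained WaveGAN-style G

z_a = torch.randn(1, latent_dim)           # "kick A"
z_b = torch.randn(1, latent_dim)           # "kick B"
condition = torch.tensor([KICK])

with torch.no_grad():
    for alpha in [0.0, 0.25, 0.5, 0.75, 1.0]:
        z = (1 - alpha) * z_a + alpha * z_b      # straight-line interpolation in latent space
        waveform = generator(z, condition)       # (1, n_samples) audio
        print(f"alpha={alpha:.2f} -> {waveform.shape[-1]} samples")
```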
So, to improve this latent-space navigation problem, we modified the system I've just explained to incorporate certain modifications from StyleGAN. Instead of providing the latent vector at the beginning, as we did in the WaveGAN-based model, we provide it as intermediate vectors from a mapping network, at the upsampling blocks within the generator. And we also use PCA to derive a set of usable controls for timbre variation. So, just a few snare drums from this model. [Snare drums] Okay.

So, another area we're working within is rhythm transformations. This involves taking a source recording and transforming it so that it takes on the rhythmic properties of a target recording. So, on the top, we have a source recording that's made up of six note events, labeled 1 through 6, with a particular rhythm. On the bottom, we have the target recording, which has seven note events, labeled 7 through 13, in a different rhythm. These rhythm transformations can take three different forms. The first form is segment reordering, where there's no time stretching and we're just taking the drums and modifying the order in which they appear. So, this one, slice 2, was originally here, and now it appears at the beginning. Then slice 5 now appears here, because it matches the layout of the drums in the target. A different type would be to preserve the segment order and perform time stretching. So, it still has the same flow of events -- one, two, three, four, five, six in a row -- but we time stretch each slice in accordance with the target recording. And the third type involves segments being reordered and time stretched. So, this is the one that would take on the new order based on the target recording, and also time stretch the slices based on that target recording.

So, way back in 2008 I worked on a project with members of the Queen Mary and NYU research communities for rhythm transformation based on this second form here, the one that preserved the segment order and applied time stretching. It was achieved through the completion of several different subtasks. One was a metrical analysis for beat and bar boundaries. Once that was completed, we would cluster the bars to get a predominant rhythm pattern associated with the source and the target. We would then use that for rhythm pattern matching. So, we would do a beat-by-beat rhythm pattern matching, where we would consider all the events that occur within a beat and try to identify a match between those. So, there are two events in this first beat here in pattern A, and four events here in pattern B. So, we would find the lowest amplitude events and remove those from the list, and then combine them together. And then we would do the time stretching so it would match. So, the source might sound something like this... ¶ [Rockin' beat with synth over it] Okay. And then it might have a target like this... [Chord progression -- with ooo yeah!] Okay. And an output that would sound something like this... [Beat -- added on drums -- chord progression -- singing added]

So, I think it works really well in certain circumstances. But I'd be lying if I said that I didn't cherry-pick that. [ Laughter ] Right. So, we found that identifying segment boundaries was often a source of artifacts or error. And in addition to that, any errors in the metrical analysis -- and there were quite a few -- would propagate through the successive stages and cause chaos, and it would sound terrible.
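For reference, here is a minimal sketch of that order-preserving, time-stretching form of the transformation, assuming the segment boundaries and the target onset grid are already known from the earlier analysis stages. The file name, timings, and equal segment counts are illustrative assumptions:

```python
# Minimal sketch: stretch each source slice to match a target rhythm,
# preserving the original slice order.
import numpy as np
import librosa

y, sr = librosa.load("source_break.wav", sr=44100)          # hypothetical file

# Source segment boundaries and target onset grid, in seconds (assumed known,
# e.g. from onset detection / metrical analysis).
src_bounds = np.array([0.00, 0.25, 0.50, 0.875, 1.125, 1.50, 2.00])
tgt_bounds = np.array([0.00, 0.375, 0.50, 0.75, 1.25, 1.625, 2.00])

out = []
for i in range(len(src_bounds) - 1):
    s0, s1 = (src_bounds[i:i + 2] * sr).astype(int)
    seg = y[s0:s1]
    src_dur = src_bounds[i + 1] - src_bounds[i]
    tgt_dur = tgt_bounds[i + 1] - tgt_bounds[i]
    # rate > 1 shortens the segment, rate < 1 lengthens it.
    out.append(librosa.effects.time_stretch(seg, rate=src_dur / tgt_dur))

result = np.concatenate(out)   # source slice order preserved, target rhythm imposed
```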
So, as an alternative to that, the utilization of end-to-end machine learning can allow for a continuous transformation and alleviate the need for discretized note selection or pattern matching stages. And that's exactly what Maciej Tomczak had in mind when he led this collaboration with AIST in Japan. Here he tackles the task of automated transformation of timbre and rhythm in percussive audio, in a continuous fashion, using adversarial autoencoders. So, the system is an adversarial autoencoder that jointly learns drum timbres and rhythm patterns, with a Gaussian mixture prior over the different styles present in the training data. The generator is conditioned for mixing timbre and rhythm properties through adversarial training. This is useful for both exploration and interpolation of the latent space. So, interpolation between different rhythmic-timbral latents is controlled by a parameter alpha: at zero, you get the source pattern, and at one, the target pattern. Here's an interpolation between two different rhythms. The sound quality here is, again, something that is a slight issue -- it's not slight, it's an issue -- and it comes from the phase reconstruction that was used during this process. ¶ [Driving drum beat] So, here we get a nice continuous gradation based on the different settings for alpha.

So, to conclude. We've worked on various algorithms and tools for analysis and synthesis of breakbeats. We've created systems that are capable of learning the playing styles of certain instruments under analysis within drum transcription -- the most notable one of these being the snare drum and the improvements thereof. We've developed systems for finding breakbeats within full music tracks, generative systems for music producers that are capable of timbre morphing between samples within their collections, and rhythm transformations that are possible without error-prone subtasks. In the future, it would be really great to continue on this pathway and hopefully get to the point where we can look at computational influence as it relates to the rhythm patterns, musicians, and subgenres we have been looking at. Obviously, we want to improve the fidelity of the generation models themselves, and build real-time systems for generation and transformation. As with most people in this field, latent space disentanglement is what we want for more interpretable controls. And longer generations would be great too -- if we could generate full breakbeats rather than, say, shorter segments. A lot of that is an issue because of the size of the systems that we're using for this. But I digress. Thank you very much. [ Applause ]

ZEESHAN: Thank you, Jason. Very awesome, very awesome. Cool. He'll be around -- you're around tomorrow. But you're around today to talk more. This is really awesome, really cool stuff. And I know the other room heard all of the sounds. [ Laughter ]

JASON: Good.

>> Cool. A 5-minute break. Just shortening it up a little bit for the next talk. Take a 5-minute break and then we'll reconvene for Santosh's talk.