1.5.2. Universal drum map
Samplers are mainly controlled by keyboard devices, or by computers sending the equivalent of played keyboard notes. Before the samples stored in a sampler can be played, each sample must be assigned to a particular position on the keyboard. If the drum samples within a break beat could be recognised, drums of a specific type (snare drums, hi-hats etc.) could always be placed automatically in the same positions on the keyboard. (A group of samples positioned across a keyboard is often referred to as a key group.) Key groups from different break beats, and indeed from any sampled rhythm, would then be completely interchangeable with one another.
The set of keyboard positions to which these drum types are assigned will be referred to as the Universal drum map.
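As an illustration, the Universal drum map can be thought of as a simple lookup table from drum type to keyboard position. The Python sketch below is illustrative only: the MIDI note numbers loosely follow General MIDI drum conventions rather than positions prescribed by this project, and the helper function assign_to_key_group is hypothetical.

# A minimal sketch of a universal drum map, assuming MIDI note numbers are used
# as keyboard positions. The note choices loosely follow General MIDI drum
# conventions and are illustrative only; the project does not fix the positions.
UNIVERSAL_DRUM_MAP = {
    "bass_drum": 36,     # every recognised bass (kick) drum sample goes here
    "snare_drum": 38,    # every recognised snare drum sample goes here
    "closed_hihat": 42,
    "open_hihat": 46,
}

def assign_to_key_group(classified_samples):
    """Build a key group: MIDI note -> list of samples of that drum type.

    classified_samples is assumed to be a list of (drum_type, sample) pairs
    produced by some recognition stage; unrecognised types are skipped.
    """
    key_group = {}
    for drum_type, sample in classified_samples:
        note = UNIVERSAL_DRUM_MAP.get(drum_type)
        if note is not None:
            key_group.setdefault(note, []).append(sample)
    return key_group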
1.5.3. Problems
The existing manual method of achieving this would be to feed a number of break beats into an application such as Recycle, identify every individual beat by hand as a particular piece of the drum kit, assign all similar drum types to the same MIDI notes, and then alter all of the MIDI rhythm files to play the new notes. This would be an extremely time-consuming task even for as few as five break beats.
The whole process could easily be automated if the application had a way of identifying drum kit sounds within break beats. It is this problem that the project aims to tackle.
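To make the intended automation concrete, the sketch below assumes a hypothetical classifier function classify_drum standing in for the recognition stage this project develops. Each sliced beat is remapped from its original, arbitrary MIDI note to the corresponding Universal drum map note, so that the same MIDI rhythm files can drive any remapped break beat.

# A sketch of the automated workflow, assuming a hypothetical classifier
# function classify_drum that labels each sliced beat (e.g. "snare_drum").
# Slices are remapped from their original, arbitrary MIDI notes to universal
# drum map notes, making key groups from different break beats interchangeable.
def remap_break_beat(slices, classify_drum, universal_drum_map):
    """Return {original_note: universal_note} for every classified slice.

    slices is assumed to be a list of (original_midi_note, audio_slice) pairs,
    as might be exported by a Recycle-style slicing tool.
    """
    note_map = {}
    for original_note, audio_slice in slices:
        drum_type = classify_drum(audio_slice)
        if drum_type in universal_drum_map:
            note_map[original_note] = universal_drum_map[drum_type]
    return note_map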
1.6. Preview of chapters
Chapter 2, Background, looks at the nature of the problem in hand and suggests ways in which it can be tackled. Chapter 3, Description of Methods, looks at the methods that have been decided upon; this includes a description of the input data, how the FFTs will be used, the noise reduction algorithm, a detailed look at the types of neural network to be used, and the post-classification algorithms. Chapter 4, Requirements, describes in plain English the requirements of the implementation software. Chapter 5, Design and Implementation, is a detailed look at how the requirements have been implemented and includes definitions of the major points of functionality. Chapter 6, Analysis of Results, presents the results together with explanations of why certain observations were made. Chapter 7, Discussion and Conclusions, discusses the successes and failures of the project and suggests ways in which the existing implementation could be improved. Chapter 8, Further Developments, discusses how this project could be integrated with the proposed application, as well as ways of improving recognition using recognition support algorithms.