[work in progress]
The following project was carried out with music students (mostly instrumentalists) who already had experience with sequencers or notation programs. The group was given the task of designing a "finale for a fictitious action scene" at least 30 seconds long.
a) The result from the sample player:
The musical character, the "fat sound", and the instruments selected in the sample player (sound example 4) were rated positively. The group appreciated being able to make music with so little effort; such programs are well suited for collecting ideas, and users have fun "experimenting" right from the start. The mix was rated less positively: to the professional musicians it often sounded too "spongy" and too "unrealistic".
b) The result after the transcription:
The result was rated only as an approximation, since not all sounds could be "reconstructed". In addition, the spatial sound image was not comparable to the original. Despite these drawbacks, the resulting score sounded much more transparent than the original. On the notation screen, the sound of each instrument could be selected individually, which made the result much more differentiated. Here the group experimented less and relied instead on its musical experience.
There was agreement that higher-quality software tools could lead to more realistic results. Only skillful handling of VST controllers and sound sets (Expression Maps, Articulation IDs)* gives the sound result its individuality. However, the discussion also showed that "personal taste" plays a role and that sound is "nothing absolute".

*) The designation depends on the manufacturer.
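To illustrate what is meant by "handling VST controllers and sound sets": the idea behind Expression Maps or Articulation IDs is that the notation or sequencer program sends extra MIDI data (keyswitch notes, controller curves) so the sample player changes articulation and dynamics instead of playing every note identically. The following minimal Python sketch, written with the mido library, builds such a MIDI sequence. The keyswitch note (C1 = note 24 for "staccato") and the use of CC1 for dynamics are assumptions for illustration only; the actual mapping depends on the sound library and manufacturer.

```python
# Minimal sketch: write a MIDI file that switches articulation via a
# keyswitch note and shapes dynamics with a CC1 ramp (assumed mapping).
import mido
from mido import Message, MetaMessage, MidiFile, MidiTrack

mid = MidiFile(ticks_per_beat=480)
track = MidiTrack()
mid.tracks.append(track)

track.append(MetaMessage('set_tempo', tempo=mido.bpm2tempo(120)))

# Keyswitch: a short, very quiet note that tells the sampler to load
# the "staccato" articulation (note number 24 is assumed here).
track.append(Message('note_on', note=24, velocity=1, time=0))
track.append(Message('note_off', note=24, velocity=0, time=10))

# Dynamics: ramp CC1 (modulation) upward so the following note swells
# instead of sounding static.
for value in range(0, 128, 16):
    track.append(Message('control_change', control=1, value=value, time=30))

# The actual note (middle C), now played with the selected articulation.
track.append(Message('note_on', note=60, velocity=96, time=0))
track.append(Message('note_off', note=60, velocity=0, time=960))

mid.save('articulation_demo.mid')
```

Editing this kind of controller and keyswitch data by hand, rather than accepting the default playback, is the "skillful handling" referred to above; it is where the individual sound of a mock-up is shaped.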