Next Steps


Our beat generation is mostly independent of chords and melodies. However, we believe there is a lot that can be done to make our compositions more rhythmically interesting by ensuring the melodic rhythm complements the beat rhythm well. To do this, one possibility would be to analyze where melody note onsets fall relative to the beats, and to see whether any patterns emerge.
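One simple form this analysis could take is a histogram of where melody onsets land within the bar. The sketch below is illustrative only; the onset representation (note start times in beats) and the function name are our own assumptions, not part of the existing pipeline.

```python
from collections import Counter

def onset_alignment(melody_onsets, beats_per_bar=4, subdivisions=4):
    """Histogram of where melody note onsets fall within the bar.

    melody_onsets: note start times in beats (floats), e.g. 0.0, 1.5, 2.25.
    Each onset is folded into a single bar and quantized to the given
    subdivision; peaks in the result reveal which beat positions the
    melody favors, which can then be compared against the beat pattern.
    """
    grid = Counter()
    for t in melody_onsets:
        pos = round((t % beats_per_bar) * subdivisions) / subdivisions
        grid[pos % beats_per_bar] += 1
    return grid

# Toy melody: onsets cluster on the downbeat and the "and" of beat 2.
onsets = [0.0, 1.5, 2.0, 3.5, 4.0, 5.5, 6.0, 7.5]
print(onset_alignment(onsets))
```

A strong peak on positions that the beat generator also emphasizes would suggest the melody and beat already complement each other; mismatched peaks would be a signal to adjust one of the two.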


There is a lot of potential future work in terms of chords and chord progressions. In CUTHBERT, our chord dataset comes from many different styles, but these chords are sometimes mixed together. If we gathered more data in each of the styles, it would be interesting for CUTHBERT to compose music in a user-specified style. It would also be interesting to learn more sophisticated models for chord generation in order to create our own chords that weren't present in the original datasets.


Our melody generation currently relies only on the last generated note, which may lead the pattern astray. In future iterations, we hope to incorporate a longer lookback. We also currently don't use any structure in our melody beyond tying loops together; using an ABA pattern or similar structure might be helpful. Another thing we wanted to try for melody generation was incorporating the TheoryTab dataset of thousands of melodies from popular songs, which would have bolstered our capacity to generate melodies significantly.
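A longer lookback amounts to raising the order of the Markov chain. The sketch below, using hypothetical note names and function names of our own choosing, conditions each new note on the last two rather than the last one:

```python
import random
from collections import Counter, defaultdict

def train_order2(notes):
    """Count transitions from each pair of consecutive notes to the next note.

    A second-order table gives each new note two notes of context,
    rather than the single note the current generator uses.
    """
    table = defaultdict(Counter)
    for a, b, c in zip(notes, notes[1:], notes[2:]):
        table[(a, b)][c] += 1
    return table

def generate(table, seed, length, rng=random):
    """Sample a melody of the given length from a seed pair seen in training."""
    melody = list(seed)
    for _ in range(length - 2):
        options = table.get((melody[-2], melody[-1]))
        if not options:
            break  # unseen context: stop here (or back off to order 1)
        notes, weights = zip(*options.items())
        melody.append(rng.choices(notes, weights=weights)[0])
    return melody

table = train_order2(['C', 'D', 'E', 'C', 'D', 'E'])
print(generate(table, ('C', 'D'), 5))
```

The same idea extends to order n by keying the table on n-note tuples, at the cost of needing more training data to fill the larger table.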

The chord similarity algorithm could also be adjusted to generate better loops. The final melodies sounded fairly decent, but were still a little dissonant at times. This is likely due to the generated chords not matching the chords in the chord-to-sequence data structure. Thus, we might try a different similarity measure, such as interval equivalence.
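One way interval equivalence could work is to compare chords by their interval pattern above the lowest note, so that any two chords of the same quality match regardless of root. This is a sketch under our own assumptions: chords as lists of MIDI note numbers, and a root-position comparison (inversions would need extra handling).

```python
def interval_pattern(chord):
    """Interval pattern of a chord above its lowest note, in semitones mod 12.

    chord: MIDI note numbers. Two chords with the same pattern share a
    quality: any root-position major triad maps to (0, 4, 7).
    """
    notes = sorted(chord)
    return tuple((n - notes[0]) % 12 for n in notes)

def interval_equivalent(c1, c2):
    """Treat two chords as similar when their interval patterns match."""
    return interval_pattern(c1) == interval_pattern(c2)

# C major (60, 64, 67) vs. G major (67, 71, 74): different roots, same quality.
print(interval_equivalent([60, 64, 67], [67, 71, 74]))
```

Under this measure, a generated chord with no exact match in the chord-to-sequence structure could still be paired with a sequence learned from a chord of the same quality, which may reduce the dissonance noted above.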


One artistic note in writing music is the use of repetition in pieces. As a passage plays, an audience slowly picks up on the tune; hearing it repeated every so often builds their expectations, and a well-placed deviation surprises them and keeps them on their toes. An example of this would be an ABABC repetition structure. The current framework doesn't support this yet, but here are the steps we would take to add this feature to CUTHBERT.

Instead of preloading more datasets and trying to analyze them for repetition (this could be hard because some parts might repeat while others do not, and subtle differences between repetitions could make a metric for repetition difficult to define), we would compute these repetitions on the fly. The previous models work by counting the frequencies of transitions between states and using the resulting probabilities to generate the next element of a melody; that data came from the previously found and cleaned datasets. What we would do differently here is not seed the Markov chain with any starting probabilities from datasets (yes, we start with no data). The dataset used to build the Markov chain would be the passage itself as it plays.

As a passage starts, the Markov chain's frequency table is initially empty, so CUTHBERT composes a 4-bar piece. Once we have that, we have a new state for the model (call it A). We now have the option of repeating A or composing something new, B. We choose one at random, record the transition, and update the frequency table. If we chose A, the resulting passage would have an AA pattern, and the passage would then tend to repeat A indefinitely. If B was chosen instead, we would have an AB pattern, and every time we play A we would have a higher tendency to follow it with B, possibly yielding an ABAB pattern for the passage.
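The scheme above could be sketched as follows. The class name, the letter labels, and the pseudo-count weighting for "compose something new" are our own illustrative choices, not part of the current CUTHBERT codebase:

```python
import random
from collections import Counter, defaultdict

class SectionChain:
    """Markov chain over section labels, built from the passage as it plays.

    The frequency table starts empty; each transition we emit is recorded,
    so early random choices bias later ones (an AA start tends to keep
    repeating A, while an AB start raises the tendency of A -> B).
    """

    def __init__(self, new_section_weight=1, rng=random):
        self.table = defaultdict(Counter)     # prev label -> Counter of next labels
        self.new_weight = new_section_weight  # base weight for composing a new section
        self.sections = []                    # labels emitted so far, e.g. ['A', 'B', 'A']
        self.rng = rng

    def next_section(self):
        if not self.sections:
            label = 'A'  # first 4-bar section: nothing to repeat yet
        else:
            prev = self.sections[-1]
            existing = sorted(set(self.sections))
            new_label = chr(ord('A') + len(existing))  # next unused letter
            options = existing + [new_label]
            # observed counts plus one pseudo-count, so every option stays possible
            weights = [self.table[prev][lbl] + 1 for lbl in existing] + [self.new_weight]
            label = self.rng.choices(options, weights=weights)[0]
            self.table[prev][label] += 1  # update the frequency table as we go
        self.sections.append(label)
        return label

chain = SectionChain(rng=random.Random(0))
print(''.join(chain.next_section() for _ in range(8)))
```

Raising `new_section_weight` makes the passage more exploratory; lowering it makes early patterns lock in faster, which is exactly the self-reinforcing behavior described above.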