Cherla, S.; Weyde, T.; Garcez, A.; Pearce, M. (2013)
Publisher: International Society for Music Information Retrieval
Languages: English
Types: Part of book or chapter of book
Subjects: QA75, M
The analysis of sequences is important for extracting information from music owing to its fundamentally temporal nature. In this paper, we present a distributed model based on the Restricted Boltzmann Machine (RBM) for melodic sequences. The model is similar to a previous successful neural network model for natural language [2]. It is first trained to predict the next pitch in a given pitch sequence, and then extended to also make use of information in sequences of note-durations in monophonic melodies on the same task. In doing so, we also propose an efficient way of representing this additional information that takes advantage of the RBM’s structure. In our evaluation, this RBM-based prediction model performs slightly better than previously evaluated n-gram models in most cases. Results on a corpus of chorale and folk melodies showed that it is able to make use of information present in longer contexts more effectively than n-gram models, while scaling linearly in the number of free parameters required.
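The abstract describes training an RBM to predict the next pitch from a context of preceding pitches, with the visible layer encoding the sequence window. Purely as an illustration of that general idea, here is a minimal sketch of a binary RBM trained with one-step contrastive divergence (CD-1, see [11]) on one-hot pitch context windows. The toy sequence, the hyperparameters, and all names below are our own assumptions for the sketch, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Minimal binary RBM trained with 1-step contrastive divergence (CD-1)."""
    def __init__(self, n_visible, n_hidden, lr=0.05):
        self.W = rng.normal(0, 0.01, (n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)  # visible biases
        self.b_h = np.zeros(n_hidden)   # hidden biases
        self.lr = lr

    def sample_h(self, v):
        p = sigmoid(v @ self.W + self.b_h)
        return p, (rng.random(p.shape) < p).astype(float)

    def sample_v(self, h):
        p = sigmoid(h @ self.W.T + self.b_v)
        return p, (rng.random(p.shape) < p).astype(float)

    def cd1_update(self, v0):
        # Positive phase: hidden activations driven by the data.
        ph0, h0 = self.sample_h(v0)
        # Negative phase: one Gibbs step (reconstruct visibles, re-infer hiddens).
        pv1, v1 = self.sample_v(h0)
        ph1, _ = self.sample_h(pv1)
        # CD-1 gradient estimates: data statistics minus reconstruction statistics.
        self.W += self.lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
        self.b_v += self.lr * (v0 - pv1).mean(axis=0)
        self.b_h += self.lr * (ph0 - ph1).mean(axis=0)

# Toy data: a short pitch sequence, encoded as overlapping windows of
# (context pitch, context pitch, next pitch), each pitch one-hot.
n_pitches = 12
seq = [0, 2, 4, 5, 7, 5, 4, 2, 0, 2, 4, 5]

def one_hot(p):
    x = np.zeros(n_pitches)
    x[p] = 1.0
    return x

data = np.array([
    np.concatenate([one_hot(seq[i]), one_hot(seq[i + 1]), one_hot(seq[i + 2])])
    for i in range(len(seq) - 2)
])

rbm = RBM(n_visible=3 * n_pitches, n_hidden=20)
for epoch in range(200):
    rbm.cd1_update(data)

# After training, the reconstruction of the visible layer should be close
# to the data; err is a mean squared reconstruction error in [0, 1].
ph, h = rbm.sample_h(data)
pv, _ = rbm.sample_v(h)
err = np.mean((data - pv) ** 2)
```

Note that the paper's model additionally conditions on note-duration sequences and is evaluated as a predictive model against n-gram baselines; the sketch above shows only the core CD-1 training loop.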

    • [1] Charles Ames. The Markov Process as a Compositional Model: A Survey and Tutorial. Leonardo, 22(2):175-187, 1989.
    • [2] Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. A Neural Probabilistic Language Model. Journal of Machine Learning Research, 3:1137-1155, 2003.
    • [3] Greg Bickerman, Sam Bosley, Peter Swire, and Robert Keller. Learning to Create Jazz Melodies using Deep Belief Nets. In International Conference On Computational Creativity, 2010.
    • [4] John Biles. GenJam: A genetic algorithm for generating jazz solos. In Proceedings of the International Computer Music Conference, pages 131-131, 1994.
    • [5] Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. Natural language processing (almost) from scratch. The Journal of Machine Learning Research, 12:2493-2537, 2011.
    • [6] Darrell Conklin. Multiple viewpoint systems for music classification. Journal of New Music Research, 42(1):19-26, 2013.
    • [7] Darrell Conklin and Ian H Witten. Multiple viewpoint systems for music prediction. Journal of New Music Research, 24(1):51-73, 1995.
    • [8] David Cope. Experiments in musical intelligence, volume 12. AR Editions Madison, WI, 1996.
    • [9] Tuomas Eerola and Petri Toiviainen. MIR in Matlab: The Midi Toolbox. In Proceedings of the International Conference on Music Information Retrieval, pages 22-27. Universitat Pompeu Fabra Barcelona, 2004.
    • [10] Joachim Ganseman, Paul Scheunders, Gautham J Mysore, and Jonathan S Abel. Evaluation of a Score-informed Source Separation System. In International Society for Music Information Retrieval Conference (ISMIR), pages 219-224, 2010.
    • [11] Geoffrey E Hinton. Training products of experts by minimizing contrastive divergence. Neural computation, 14(8):1771-1800, 2002.
    • [12] Geoffrey E Hinton, Simon Osindero, and Yee-Whye Teh. A Fast Learning Algorithm for Deep Belief Nets. Neural Computation, 18:1527-1554, 2006.
    • [13] Robert M Keller and David R Morrison. A Grammatical Approach to Automatic Improvisation. In Sound and Music Computing Conference, pages 11-13, 2007.
    • [14] Hugo Larochelle and Yoshua Bengio. Classification using discriminative restricted Boltzmann machines. In International Conference on Machine Learning (ICML), pages 536-543. ACM Press, 2008.
    • [15] Yann LeCun, Sumit Chopra, Raia Hadsell, M Ranzato, and F Huang. A tutorial on energy-based learning. Predicting Structured Data, 2006.
    • [16] Christopher D Manning and Hinrich Schütze. Foundations of statistical natural language processing. MIT Press, 1999.
    • [17] Andriy Mnih and Geoffrey E Hinton. A scalable hierarchical distributed language model. In Advances in neural information processing systems, pages 1081-1088, 2008.
    • [18] Michael C Mozer. Connectionist music composition based on melodic, stylistic and psychophysical constraints. Music and connectionism, pages 195-211, 1991.
    • [19] François Pachet. The continuator: Musical interaction with style. Journal of New Music Research, 32(3):333-341, 2003.
    • [20] Marcus Pearce. The Construction and Evaluation of Statistical Models of Melodic Structure in Music Perception and Composition. PhD thesis, 2005.
    • [21] Marcus Pearce and Geraint Wiggins. Improved methods for statistical modelling of monophonic music. Journal of New Music Research, 33(4):367-385, 2004.
    • [22] Claude E. Shannon. A Mathematical Theory of Communication. The Bell System Technical Journal, 27(July):379-423, 623-656, 1948. Reprinted in ACM SIGMOBILE Mobile Computing and Communications Review, 5(1):3-55, 2001.
    • [23] Paul Smolensky. Information processing in dynamical systems: foundations of harmony theory. In Parallel distributed processing: explorations in the microstructure of cognition, vol. 1, pages 194-281. MIT Press, Cambridge, MA, USA, 1986.
