Projects

Musimathics Lab is a research laboratory aimed at developing new technologies for music. In recent years, projects have been carried out in various research areas, including Algorithmic Music Composition, Music Style Recognition, Computational Intelligence in Music, Music Gesture Recognition, HCI with Virtual Instruments, Music Visualization, and others.

Description

EvoComposer

EvoComposer is an evolutionary algorithm for the 4-voice harmonization problem: one of the 4 voices (bass, tenor, alto, or soprano) is given as input, and the composer has to write the other 3 voices so as to obtain a complete 4-voice piece of music with a 4-note chord for each input note. Solving such a problem means finding appropriate chords for each input note and also placing the notes within each chord so that melodic concerns are addressed. This problem is known as the unfigured harmonization problem. EvoComposer uses a novel representation of the solutions in terms of chromosomes (which handles both harmonic and nonharmonic tones), specialized operators (which exploit musical information to improve the quality of the produced individuals), and a novel hybrid multiobjective evaluation function (based on an original statistical analysis of a large corpus of Bach's music). Moreover, EvoComposer is the first evolutionary algorithm for this specific problem. EvoComposer is a multiobjective evolutionary algorithm, based on the well-known NSGA-II strategy, and takes into consideration two objectives: the harmonic objective (finding appropriate chords) and the melodic objective (finding appropriate melodic lines). The composing process is totally automatic, without any human intervention.
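
As an illustration of the multiobjective view adopted by EvoComposer, the sketch below (in Java, the language used for other tools of the laboratory) scores a candidate harmonization on a harmonic and a melodic objective and tests Pareto dominance, the notion on which NSGA-II selection is based. The scoring rules shown are illustrative placeholders, not the paper's chromosome encoding or its statistical model of Bach's corpus.

import java.util.Arrays;

// Minimal sketch (not the published implementation): candidates are scored on
// two objectives and compared by Pareto dominance, as in NSGA-II selection.
public class HarmonizationObjectivesSketch {

    // A candidate solution: one 4-note chord (bass, tenor, alto, soprano as MIDI pitches) per input note.
    static class Candidate {
        int[][] chords;
        double harmonicCost;   // lower is better
        double melodicCost;    // lower is better
    }

    // Placeholder harmonic objective: penalize chords with fewer than 3 distinct pitch classes.
    static double harmonicCost(int[][] chords) {
        double cost = 0;
        for (int[] chord : chords) {
            long distinct = Arrays.stream(chord).map(p -> p % 12).distinct().count();
            if (distinct < 3) cost += 1.0;
        }
        return cost;
    }

    // Placeholder melodic objective: prefer small melodic motion in every voice.
    static double melodicCost(int[][] chords) {
        double cost = 0;
        for (int voice = 0; voice < 4; voice++)
            for (int i = 1; i < chords.length; i++)
                cost += Math.abs(chords[i][voice] - chords[i - 1][voice]);
        return cost;
    }

    // Pareto dominance: a dominates b if it is no worse on both objectives and strictly better on one.
    static boolean dominates(Candidate a, Candidate b) {
        return a.harmonicCost <= b.harmonicCost && a.melodicCost <= b.melodicCost
                && (a.harmonicCost < b.harmonicCost || a.melodicCost < b.melodicCost);
    }
}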

R. De Prisco, G. Zaccagnino, R. Zaccagnino.
EvoComposer: An Evolutionary Algorithm for 4-voice Music Compositions.
Evolutionary Computation, vol. 28, pp. 489-530 (2020), ISSN: 1063-6560, doi: 10.1162/evco_a_00265.

GymIntelligence

Ambient Intelligence (AmI) is an interdisciplinary research area of ICT that has evolved since the 90s, taking great advantage of the advent of the Internet of Things (IoT). AmI uses Artificial Intelligence (AI) to create an intelligent ecosystem in which computers, sensors, lighting, music, personal devices, and distributed services work together to improve the user experience through natural and intuitive user interfaces. Nowadays, AmI is used in various contexts, e.g., for building smart homes and smart cities, providing healthcare, and creating an adequate atmosphere in retail and public environments. GymIntelligence is an AmI system able to provide an adequate musical atmosphere according to the users' physical effort during training. The music is taken from Spotify and is classified according to music features provided by Spotify itself. The system is based on a multi-agent computational intelligence model built on two main components: (i) machine learning methods that forecast appropriate values for the Spotify music features, and (ii) a multiobjective dynamic genetic algorithm that selects a specific Spotify music track according to such values. GymIntelligence senses the ambient with a minimal, low-cost, and non-intrusive set of sensors, and it has been designed considering the outcome of a preliminary analysis in real gyms, involving real users. We have considered well-known regression methods and validated them using data collected (i) about the users' physical effort, through the sensors, and (ii) about the users' music preferences, through an Android app that the users used during training.
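
A minimal Java sketch of the track-selection idea is shown below: given target values for a few Spotify audio features, forecast by the regression models from the sensed physical effort, the candidate track whose features are closest to the targets is chosen. The real system performs this step with a multiobjective dynamic genetic algorithm; the plain nearest-match selection and the specific features used here (energy, tempo, valence) are assumptions made only for the example.

import java.util.Comparator;
import java.util.List;

// Minimal sketch: pick the track whose Spotify audio features best match the forecast target values.
public class TrackSelectionSketch {

    record Track(String spotifyId, double energy, double tempo, double valence) {}

    // Squared distance between a track's features and the forecast target values.
    static double distance(Track t, double energy, double tempo, double valence) {
        double de = t.energy() - energy;
        double dt = (t.tempo() - tempo) / 200.0;   // rough normalization of BPM against the other [0,1] features
        double dv = t.valence() - valence;
        return de * de + dt * dt + dv * dv;
    }

    // Return the candidate track closest to the forecast feature values.
    static Track select(List<Track> candidates, double energy, double tempo, double valence) {
        return candidates.stream()
                .min(Comparator.comparingDouble(t -> distance(t, energy, tempo, valence)))
                .orElseThrow();
    }
}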

Roberto De Prisco, Alfonso Guarino, Nicola Lettieri, Delfina Malandrino, Rocco Zaccagnino.
Providing music service in Ambient Intelligence: experiments with gym athletes.
Expert Systems with Applications, Elsevier. ISSN: 0957-4174. Minor revision.

Musica Parlata (Spoken Music)

Musica Parlata is a software tool designed for teaching music to blind people. Musica Parlata is an idea of Prof. Alfredo Capozzi (http://www.musicaparlata.it/ - the webpage is in Italian). The Musimathics Laboratory has been involved in this project to rewrite the original software with the goal of improving its usability. The Musica Parlata Player allows blind people to hear the names of the notes and of the chords of the song that they are studying. Moreover, it is possible to slow down or speed up the execution and to set loops for practice purposes. On the download page you can download the software and some demo videos.

Alfredo Capozzi, Roberto De Prisco, Michele Nasti, Rocco Zaccagnino.
Musica Parlata: a methodology to teach music to blind people.
ACM SIGACCESS Conference on Computers and Accessibility, Boulder, CO, USA, October 22 - 24, 2012. ACM 2012, ISBN 978-1-4503-1321-6, pp. 245-246.

MarcoSmiles

Today's technology is redefining the way individuals can work, communicate, share experiences, constructively debate, and actively participate in any aspect of daily life, ranging from business to education, from the political and intellectual to the social, and so on. Enabling access to technology for any individual, reducing obstacles, avoiding discrimination, and making the overall experience easier and more enjoyable is an important objective of both research and industry. By exploiting natural user interfaces, initially conceived for the game market, it is possible to enhance the traditional modalities of interaction with technology, to build new forms of interaction by transporting users into a virtual dimension that still fully reflects reality, and, finally, to improve the overall perceived experience. The increasing popularity of these innovative interfaces has led to their adoption in other fields, including Computer Music. MarcoSmiles is a system designed to allow individuals to perform music in an easy, innovative, and personalized way. The idea is to design new interaction modalities for music performances that use the hands, without the support of a real musical instrument. We exploited Artificial Neural Networks to customize the virtual musical instrument, to provide the information needed to map hand configurations into musical notes and, finally, to train and test these configurations.
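
A minimal Java sketch of the mapping step is shown below: a hand configuration, captured as a feature vector (for example, per-finger flexion values), is mapped to the MIDI note of the closest user-defined configuration. MarcoSmiles performs this mapping with trained Artificial Neural Networks; the nearest-centroid classifier and the feature layout used here are simplifying assumptions for the example.

import java.util.List;

// Minimal sketch: map the current hand configuration to the note of the closest learned configuration.
public class HandToNoteSketch {

    // One user-defined hand configuration together with the MIDI note it should trigger.
    record Gesture(double[] centroid, int midiNote) {}

    static double squaredDistance(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return sum;
    }

    // Return the note of the learned configuration closest to the current hand features.
    static int mapToNote(double[] handFeatures, List<Gesture> learned) {
        Gesture best = null;
        double bestDistance = Double.POSITIVE_INFINITY;
        for (Gesture g : learned) {
            double d = squaredDistance(handFeatures, g.centroid());
            if (d < bestDistance) {
                bestDistance = d;
                best = g;
            }
        }
        return best.midiNote();   // assumes at least one learned configuration
    }
}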

Roberto De Prisco, Delfina Malandrino, Gianluca Zaccagnino, Rocco Zaccagnino.
Natural User Interfaces to Support and Enhance Real-Time Music Performance.
AVI 2016: 204-211.

EvoBackMusic

Systems for real-time composition of background music respond to changes in the environment by generating music that matches the current state of the environment and/or of the user. EvoBackMusic is a multi-agent system that exploits a feed-forward neural network and a multi-objective genetic algorithm to produce background music. The neural network is trained to learn the preferences of the user, and such preferences are exploited by the genetic algorithm to compose the music. The composition process takes into account a set of controllers that describe several aspects of the environment, such as the dynamism of both the user and the context, other physical characteristics, and the emotional state of the user. Previous systems mainly focus on the emotional aspect. EvoBackMusic has been implemented in Java using Encog and JFugue, and it can be integrated into real and virtual environments.
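
A minimal sketch of the preference-learning component is shown below, using the Encog library mentioned above: a small feed-forward network is trained to map a vector of environment controllers to the musical preference values that the genetic algorithm will later use. The number of inputs, hidden units, and outputs, as well as the error threshold, are illustrative assumptions; the actual controllers, targets, and topology of EvoBackMusic are described in the paper.

import org.encog.engine.network.activation.ActivationSigmoid;
import org.encog.ml.data.MLDataSet;
import org.encog.ml.data.basic.BasicMLDataSet;
import org.encog.neural.networks.BasicNetwork;
import org.encog.neural.networks.layers.BasicLayer;
import org.encog.neural.networks.training.propagation.resilient.ResilientPropagation;

// Minimal Encog sketch: train a feed-forward network that maps environment controllers to preference values.
public class PreferenceNetworkSketch {

    public static BasicNetwork train(double[][] controllers, double[][] preferences) {
        BasicNetwork network = new BasicNetwork();
        network.addLayer(new BasicLayer(null, true, controllers[0].length));                       // input layer
        network.addLayer(new BasicLayer(new ActivationSigmoid(), true, 8));                        // hidden layer (size is an assumption)
        network.addLayer(new BasicLayer(new ActivationSigmoid(), false, preferences[0].length));   // output layer
        network.getStructure().finalizeStructure();
        network.reset();

        MLDataSet trainingSet = new BasicMLDataSet(controllers, preferences);
        ResilientPropagation training = new ResilientPropagation(network, trainingSet);
        do {
            training.iteration();
        } while (training.getError() > 0.01);   // illustrative stopping criterion
        training.finishTraining();
        return network;
    }
}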

Roberto De Prisco, Delfina Malandrino, Gianluca Zaccagnino, Rocco Zaccagnino.
An Evolutionary Composer for Real-Time Background Music.
EvoMUSART 2016: 135-151.

Music Plagiarism

Plagiarism, i.e., copying the work of others and trying to pass it off as one's own, is a debated topic in different fields. In the music field in particular, plagiarism is a controversial and debated phenomenon, not least because of the huge amount of money that music can generate. However, the existing mechanisms for plagiarism detection mainly apply superficial, brute-force string matching techniques. Such well-known metrics, widely used to discover similarities in text documents, do not work well for discovering similarities in music compositions. Despite the widespread belief that a few notes in common between two songs are enough to decide whether plagiarism exists, the analysis of similarities is a very complex process. We have proposed both text-based and fuzzy-based plagiarism detection tools.
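
As a simple illustration of a text-based similarity check on melodies, the Java sketch below reduces two note sequences to their pitch-interval strings (which makes the comparison transposition-invariant) and compares them with edit distance. The published tools use more elaborate text-based and fuzzy vectorial metrics; this sketch only shows why naive note-by-note matching is too crude a criterion.

// Minimal sketch: interval-based edit-distance similarity between two melodies (MIDI note sequences).
public class MelodicSimilaritySketch {

    // Pitch intervals between consecutive notes (transposition-invariant representation).
    static int[] intervals(int[] midiNotes) {
        int[] d = new int[midiNotes.length - 1];
        for (int i = 1; i < midiNotes.length; i++) d[i - 1] = midiNotes[i] - midiNotes[i - 1];
        return d;
    }

    // Classic Levenshtein distance between two interval sequences.
    static int editDistance(int[] a, int[] b) {
        int[][] dp = new int[a.length + 1][b.length + 1];
        for (int i = 0; i <= a.length; i++) dp[i][0] = i;
        for (int j = 0; j <= b.length; j++) dp[0][j] = j;
        for (int i = 1; i <= a.length; i++)
            for (int j = 1; j <= b.length; j++)
                dp[i][j] = Math.min(Math.min(dp[i - 1][j] + 1, dp[i][j - 1] + 1),
                        dp[i - 1][j - 1] + (a[i - 1] == b[j - 1] ? 0 : 1));
        return dp[a.length][b.length];
    }

    // Similarity in [0,1]: 1 means identical interval content.
    static double similarity(int[] melodyA, int[] melodyB) {
        int[] ia = intervals(melodyA), ib = intervals(melodyB);
        int maxLen = Math.max(ia.length, ib.length);
        return maxLen == 0 ? 1.0 : 1.0 - (double) editDistance(ia, ib) / maxLen;
    }
}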

Roberto De Prisco, Delfina Malandrino, Gianluca Zaccagnino, Rocco Zaccagnino.
Fuzzy vectorial-based similarity detection of music plagiarism.
FUZZ-IEEE 2017: 1-6.

Roberto De Prisco, Delfina Malandrino, Gianluca Zaccagnino, Rocco Zaccagnino.
A computational intelligence text-based detection system of music plagiarism.
ICSAI 2017: 519-524.

Roberto De Prisco, Antonio Esposito, Nicola Lettieri, Delfina Malandrino, Donato Pirozzi, Gianluca Zaccagnino, Rocco Zaccagnino.
Music Plagiarism at a Glance: Metrics of Similarity and Visualizations.
IV 2017: 410-415.

Roberto De Prisco, Nicola Lettieri, Delfina Malandrino, Donato Pirozzi, Gianluca Zaccagnino, Rocco Zaccagnino.
Visualization of Music Plagiarism: Analysis and Evaluation.
IV 2016: 177-182.

VisualHarmony

Understanding the structure of music compositions requires an ability built over time, through the study of music theory and countless hours of practice. For beginners in particular, it can be a time-consuming and tedious task because of the steep learning curve, especially for classical music. Composing this type of music requires studying rules that concern many structural aspects of music, such as melodic and, above all, harmonic aspects. To overcome these difficulties, interdisciplinary techniques could be exploited to understand whether extra (visual) information, provided through a specific software tool, can improve learning in a quick and effective way. VisualHarmony is a tool that allows users to perform the harmonic analysis of music compositions by exploiting visual clues superimposed on the music scores. Since harmonic analysis requires identifying similar tonalities and relevant degrees, the proposed visualization approach uses close colors to represent similar tonalities and degrees.
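
The Java sketch below illustrates the "similar tonalities, close colors" principle: tonalities are placed on the circle of fifths and mapped to hues, so that closely related keys (e.g., C major and G major) receive similar colors. The concrete palette used by VisualHarmony may differ; this mapping is only an assumed example of the idea.

import java.awt.Color;

// Minimal sketch: assign close hues to harmonically close tonalities via the circle of fifths.
public class TonalityColorSketch {

    // Hue for a major key given its tonic pitch class (0 = C, 1 = C#, ..., 11 = B).
    static float hueForTonic(int tonicPitchClass) {
        int circlePosition = (tonicPitchClass * 7) % 12;   // position on the circle of fifths (neighbors are related keys)
        return circlePosition / 12.0f;                     // spread the 12 positions evenly over the hue wheel
    }

    static Color colorForTonic(int tonicPitchClass) {
        return Color.getHSBColor(hueForTonic(tonicPitchClass), 0.6f, 0.9f);
    }
}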

Roberto De Prisco, Delfina Malandrino, Donato Pirozzi, Gianluca Zaccagnino, Rocco Zaccagnino.
Evaluation Study of Visualisations for Harmonic Analysis of 4-Part Music.
22nd International Conference Information Visualisation (IV 2018), Fisciano, Italy, July 10-13, 2018: 484-489.

Delfina Malandrino, Donato Pirozzi, Rocco Zaccagnino.
Learning the Harmonic Analysis: Is visualization an effective approach?
Multimedia Tools and Applications, pp. 1-32 (2019), ISSN: 1380-7501, doi: 10.1007/s11042-019-07879-5.

VisualMelody

Experienced musicians have the ability to understand the structural elements of music compositions. Such an ability is built over time through the study of music theory, the understanding of the rules that guide the composition of music, and countless hours of practice. The learning process is hard, especially for classical music, where the rigidity of the musical structures and styles requires great effort to understand, assimilate, and then master the learned notions. In particular, we focused our attention on a specific type of music composition, namely music in chorale style (four-voice music). Composing this type of music is often perceived as a difficult task because of the rules the composer has to adhere to. VisualMelody is a tool that can help people lacking a strong knowledge of music theory. It exploits graphic elements to draw attention to possible errors in the composition. It has been developed as an interactive system that employs the proposed visualization technique to facilitate the understanding of the structure of music compositions. The aim is to allow people to compose four-voice music in a quick and effective way, that is, avoiding the errors defined by classical music theory rules.
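
As an example of the kind of classical-harmony error that such a tool can highlight, the Java sketch below detects consecutive (parallel) fifths or octaves between two voices of a chord progression. Which rules VisualMelody actually checks, and how it visualizes them, is described in the paper; the check shown here is only one standard textbook rule.

// Minimal sketch: detect parallel fifths/octaves between two voices across consecutive chords.
public class ParallelMotionCheckSketch {

    // chords[i] = {bass, tenor, alto, soprano} as MIDI pitches.
    static boolean hasParallelFifthOrOctave(int[][] chords, int voiceA, int voiceB) {
        for (int i = 1; i < chords.length; i++) {
            int prevInterval = Math.abs(chords[i - 1][voiceA] - chords[i - 1][voiceB]) % 12;
            int currInterval = Math.abs(chords[i][voiceA] - chords[i][voiceB]) % 12;
            int motionA = chords[i][voiceA] - chords[i - 1][voiceA];
            int motionB = chords[i][voiceB] - chords[i - 1][voiceB];
            boolean parallelMotion = motionA != 0 && Integer.signum(motionA) == Integer.signum(motionB);
            boolean perfectInterval = currInterval == 7 || currInterval == 0;   // fifth, octave, or unison
            if (parallelMotion && perfectInterval && currInterval == prevInterval) return true;
        }
        return false;
    }
}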

Roberto De Prisco, Delfina Malandrino, Donato Pirozzi, Gianluca Zaccagnino, Rocco Zaccagnino.
Understanding the structure of music compositions: is visualization an effective approach?
Information Visualization, vol. 16, p. 139-152 (2017), ISSN: 1473-8716, doi: 10.1177/1473871616655468.

SoMusic

In music, the score has always been the main instrument used for transcription, performance, and composition. Several systems exist for the transcription and composition of music scores. Although these are widespread and mature, they generally do not provide: (i) a collaborative mechanism that allows composers to spread their creations and to search for compositions made by other people, and (ii) a communication mechanism that allows composers to become an active part of the composing process of other composers, by exchanging messages, sharing information, and so on. Generally, musicians resort to the most common means of communication on the web, such as Facebook, Twitter, Instagram, and LinkedIn, to obtain these features. SoMusic is a collaborative social platform that, in addition to the main features provided by the most popular social networks, allows musicians to collaborate, to create a musical composition simultaneously and concurrently, and to find other composers with a similar composition style. Social collaboration is the key to increasing both the quality and the productivity of music making: musicians participate in discussions, learn in web classrooms, co-create music compositions, and share musical ideas. Furthermore, SoMusic also serves as an educational support tool that allows teachers to create online study classes, monitor the progress of the various activities, and interact with students in real time.

Alfonso Guarino, Delfina Malandrino, Nicola Lettieri, Luca Peppe, Michele Spina, Rocco Zaccagnino.
A Social Platform for Music: Learning and Making Compositions Through Collaboration and Social Interactions.
In: International Conference on Systems and Informatics. p. 1004-1009, Shanghai, China, November 2-4, 2019, doi: 10.1109/ICSAI48974.2019.9010436.

Roberto De Prisco, Delfina Malandrino, Luca Peppe, Michele Spina, Rocco Zaccagnino.
SoMusic! A collaborative Web platform for composing, sharing, and learning music.
Submitted to World Wide Web Journal, Springer. ISSN: 1386-145X (Print) 1573-1413.

ScoreConductor

ScoreConductor is a customizable system based on neural networks for the continuous recognition of the gestures of a music conductor. The system exploits the fact that the gestures are conducting gestures and therefore follow somewhat specific patterns. Although we have trained and tested the system on a particular set of gestures, the recognizer is customizable in the sense that the user decides the set of gestures to be recognized and trains the system before using it. ScoreConductor is implemented in Java. The only hardware required is an infrared pen, a Nintendo Wii remote controller, and a personal computer.
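
A minimal Java sketch of a typical preprocessing step for this kind of trajectory-based recognition is shown below: the 2D path of the infrared pen, as tracked by the Wii remote, is resampled to a fixed number of points, producing a feature vector of constant length that can be fed to a neural-network classifier. The actual features and network used by ScoreConductor are described in the paper; this resampling scheme is an assumption made for the example.

// Minimal sketch: resample a 2D gesture trajectory to a fixed-length feature vector for a neural classifier.
public class GestureFeaturesSketch {

    // Resample a trajectory given as parallel x/y arrays to n points by linear interpolation over the path index.
    static double[] resample(double[] xs, double[] ys, int n) {
        double[] features = new double[2 * n];
        for (int k = 0; k < n; k++) {
            double t = k * (xs.length - 1) / (double) (n - 1);   // fractional index along the captured path
            int i = (int) t;
            double frac = t - i;
            double x = (i + 1 < xs.length) ? xs[i] * (1 - frac) + xs[i + 1] * frac : xs[i];
            double y = (i + 1 < ys.length) ? ys[i] * (1 - frac) + ys[i + 1] * frac : ys[i];
            features[2 * k] = x;
            features[2 * k + 1] = y;
        }
        return features;
    }
}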

Roberto De Prisco, Paolo Sabatino, Gianluca Zaccagnino, Rocco Zaccagnino.
A Customizable Recognizer for Orchestral Conducting Gestures Based on Neural Networks.
EvoApplications (2) 2011: 254-263.

EvoBassComposer

EvoBassComposer is a multi-objective genetic algorithm for the unfigured bass harmonization problem: a bass line is given and the composer has to write the other 3 voices to obtain a complete 4-voice piece of music with a 4-note chord for each bass note. Solving such a problem means finding appropriate chords for each bass note and also placing the four notes within each chord so that melodic concerns are addressed, especially for the highest voice (the soprano). EvoBassComposer automatically composes music when provided with a bass line as input. The objectives considered are two: the harmonic objective (finding appropriate chords) and the melodic objective (finding good melodic lines).

Roberto De Prisco, Gianluca Zaccagnino, Rocco Zaccagnino.
EvoBassComposer: a multi-objective genetic algorithm for 4-voice compositions.
GECCO 2010: 817-818.

Roberto De Prisco, Rocco Zaccagnino.
An Evolutionary Music Composer Algorithm for Bass Harmonization.
EvoWorkshops 2009: 567-572.