AI System for Music Generation Based on User Preferences
- Create Date 21 November 2025
- Last Updated 21 November 2025
Omswaroop T M, Rathan S, Poornesh D, Sreeya Krishna, Tanni Saha Puja, Dr. Zunaid Rasool, Mr. Lanke Ravi Kumar
Department of Computer Science and Engineering, JAIN (Deemed-to-be-University)
Abstract - The intersection of Computational Creativity and Music Information Retrieval (MIR) presents unique challenges in automating music generation while maintaining emotional coherence. While deep learning models such as Generative Adversarial Networks (GANs) and Transformers have achieved state-of-the-art results in symbolic music generation, they often suffer from high computational cost, "black-box" opacity, and the absence of closed-loop feedback. This paper proposes a lightweight, transparent, rule-based framework for affective melody generation coupled with a deterministic validation engine. The system uses a constrained stochastic process (a random walk) to generate MIDI sequences grounded in Western music theory, which are immediately synthesized into audio waveforms. In parallel, a Digital Signal Processing (DSP) module extracts spectral features (spectral centroid, spectral bandwidth, and RMS energy) to classify the generated audio into "Energetic" or "Calm" affective states. Experimental validation demonstrates that this architecture enforces harmonic consonance while providing objective, quantifiable feedback on the emotional timbre of the generated composition, achieving 92% classification accuracy against target moods.
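The constrained random walk described in the abstract can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: the scale (C major, C4..C5 as MIDI note numbers), the maximum step size, and the function names are assumptions for demonstration only.

```python
import random

# C major scale, one octave, as MIDI note numbers (C4 = 60).
# Scale choice and range are illustrative assumptions.
C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]

def generate_melody(length=16, max_step=2, seed=None):
    """Constrained random walk over scale indices.

    Restricting each move to at most `max_step` scale degrees, and
    clamping to the scale boundaries, keeps every note consonant with
    the key and the melodic contour smooth.
    """
    rng = random.Random(seed)
    idx = rng.randrange(len(C_MAJOR))          # random starting degree
    melody = [C_MAJOR[idx]]
    for _ in range(length - 1):
        step = rng.randint(-max_step, max_step)
        idx = min(max(idx + step, 0), len(C_MAJOR) - 1)  # clamp to scale
        melody.append(C_MAJOR[idx])
    return melody
```

Because every emitted pitch is drawn from the scale by construction, harmonic consonance is enforced deterministically rather than learned, which is what makes the generator transparent and cheap compared with GAN or Transformer decoders.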
Keywords— Music Information Retrieval (MIR), Spectral Feature Extraction, Affective Computing, Digital Signal Processing.
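The validation side of the pipeline, classifying generated audio as "Energetic" or "Calm" from spectral centroid and RMS energy, can be sketched with NumPy as below. The threshold values, sample rate, and function names are illustrative assumptions; the paper's calibrated decision rule may differ.

```python
import numpy as np

SR = 22050  # sample rate in Hz (assumed)

def spectral_features(x, sr=SR):
    """Spectral centroid (Hz) and RMS energy of a mono signal.

    Centroid is the magnitude-weighted mean frequency of the spectrum,
    a standard proxy for perceived brightness; RMS measures loudness.
    """
    mag = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sr)
    centroid = float(np.sum(freqs * mag) / (np.sum(mag) + 1e-12))
    rms = float(np.sqrt(np.mean(x ** 2)))
    return centroid, rms

def classify_mood(x, sr=SR, centroid_thresh=1500.0, rms_thresh=0.1):
    """Rule-based mood label: bright AND loud -> 'Energetic', else 'Calm'.

    Thresholds here are placeholders for demonstration, not the
    paper's tuned values.
    """
    centroid, rms = spectral_features(x, sr)
    if centroid > centroid_thresh and rms > rms_thresh:
        return "Energetic"
    return "Calm"
```

For example, a loud 3 kHz sine tone has a high centroid and high RMS and is labelled "Energetic", while a quiet 220 Hz tone falls below both thresholds and is labelled "Calm"; this deterministic mapping is what gives the system objective, closed-loop feedback on affect.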