A Comparative Study of LSTM and Transformer Models in Music Melody Generation

English Title
A Comparative Study of LSTM and Transformer Models in Music Melody Generation
Publisher
Korea Institute for Humanities and Social Sciences (KIHSS)
Authors
郑嘉祥 (Jiaxiang ZHENG), 曹墨曦 (Moxi CAO)
Publication Info
Journal of Global Arts Studies (JGAS), Vol.1, No.1, pp.1-10 (10 pages)
Subject Classification
Humanities > Other Humanities
File Format
PDF
Publication Date
2023.12.31

Abstract

[Background] In recent years, using deep learning models to generate music has become the mainstream direction in AI music. However, the dominant models for music generation still face several problems, the biggest being their inability to effectively model musical structure, which prevents computers from creating compositions that conform to it. [Objective] We therefore need to explore which models can effectively model the structure of music and create more human-like music. [Method] We conduct comparative experiments, analyzing the strengths and weaknesses of music generated by LSTM and Transformer models, and propose improvements based on the findings. [Results] The experiments show that the LSTM models musical structure better than the Transformer on shorter sequences but cannot handle overly long sequences, whereas the Transformer outperforms the LSTM on longer sequences and, after improvement, can effectively model musical structure over long sequences, producing compositions that align with human musical perception. [Conclusion] We therefore believe that the Transformer model is better suited to AI music composition, and that improving its attention mechanism to enhance recognition of musical structure will be the mainstream direction for music generation.
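The abstract attributes the Transformer's advantage on long sequences to its attention mechanism. As a minimal illustrative sketch (not the authors' actual setup; the 4-note "melody" and scalar embeddings are made up), the scaled dot-product attention below shows how every position can draw on every other position in a single step, regardless of distance, whereas a recurrent model must relay information note by note:

```python
# Toy scaled dot-product attention over a scalar "melody" (d_k = 1).
# Purely illustrative: values and sequence are invented for this sketch.
import math

def softmax(xs):
    m = max(xs)  # subtract max for numerical stability
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(queries, keys, values):
    """Each output position is a weighted mix of ALL value positions."""
    d_k = 1
    out = []
    for q in queries:
        scores = [q * k / math.sqrt(d_k) for k in keys]
        weights = softmax(scores)
        out.append(sum(w * v for w, v in zip(weights, values)))
    return out

# A toy 4-note melody used as queries, keys, and values (self-attention).
notes = [0.0, 1.0, 0.5, 1.0]
ctx = attention(notes, notes, notes)
print([round(c, 3) for c in ctx])
```

Because positions 1 and 3 carry identical embeddings, they attend identically and receive identical context, however far apart they sit in the sequence; this direct long-range access is the property the abstract credits for the Transformer's performance on longer sequences.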

Table of Contents

1. Introduction
2. Research Background
3. Research Methods
4. Experiments
5. Conclusion
References

How to Cite

APA

郑嘉祥 (Jiaxiang ZHENG), 曹墨曦 (Moxi CAO). (2023). A Comparative Study of LSTM and Transformer Models in Music Melody Generation. Journal of Global Arts Studies (JGAS), 1(1), 1-10.

MLA

郑嘉祥 (Jiaxiang ZHENG), 曹墨曦 (Moxi CAO). "A Comparative Study of LSTM and Transformer Models in Music Melody Generation." Journal of Global Arts Studies (JGAS), 1.1 (2023): 1-10.
