
Handbook of Multimedia for Digital Entertainment and Arts - P14


Document information:

Handbook of Multimedia for Digital Entertainment and Arts - P14: The advances in computer entertainment, multi-player and online games, technology-enabled art, culture and performance have created a new form of entertainment and art, which attracts and absorbs their participants. The fantastic success of this new field has influenced the development of the new digital entertainment industry and related products and services, which has impacted every aspect of our lives.
Content extracted from the document:
Chapter 17
Automated Music Video Generation Using Multi-level Feature-based Segmentation

Jong-Chul Yoon, In-Kwon Lee, and Siwoo Byun

J.-C. Yoon and I.-K. Lee
Department of Computer Science, Yonsei University, Seoul, Korea
e-mail: media19@cs.yonsei.ac.kr; iklee@yonsei.ac.kr

S. Byun
Department of Digital Media, Anyang University, Anyang, Korea
e-mail: swbyun@anyang.ac.kr

B. Furht (ed.), Handbook of Multimedia for Digital Entertainment and Arts, DOI 10.1007/978-0-387-89024-1_17, © Springer Science+Business Media, LLC 2009

Introduction

The expansion of the home video market has created a requirement for video editing tools that allow ordinary people to assemble videos from short clips. However, professional skills are still necessary to create a music video, which requires a stream to be synchronized with pre-composed music. Because the music and the video are pre-generated in separate environments, even a professional producer usually requires a number of trials to obtain a satisfactory synchronization, which is something that most amateurs are unable to achieve.

Our aim is to automatically extract a sequence of clips from a video and assemble them to match a piece of music. Previous authors [8, 9, 16] have approached this problem by trying to synchronize passages of music with arbitrary frames in each video clip using predefined feature rules. However, each shot in a video is an artistic statement by the video-maker, and we want to retain the coherence of the video-maker's intentions as far as possible.

We introduce a novel method of music video generation that better preserves the flow of shots in the videos because it is based on multi-level segmentation of the video and audio tracks. A shot boundary in a video clip can be recognized as an extreme discontinuity, especially a change in background or a discontinuity in time. However, even a single shot filmed continuously with the same camera, location, and actors can have breaks in its flow; for example, one actor might leave the set as another appears. We can use these changes of flow to break a video into segments that can be matched more naturally with the accompanying music.

Our system analyzes the video and music and then matches them. The first process is to segment the video using flow information. Velocity and brightness features are then determined for each segment. Based on these features, a video segment is found to match each segment of the music. If a satisfactory match cannot be found, the level of segmentation is increased and the matching process is repeated.
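To make the pipeline concrete, the sketch below segments a video wherever its per-frame velocity (mean optical-flow magnitude) and brightness change abruptly, the kind of flow discontinuity described above. It is a minimal sketch, not the chapter's implementation: the use of OpenCV's Farneback optical flow, the feature normalization, and the boundary_threshold parameter are all assumptions made for this example.

```python
# Minimal sketch of flow-based video segmentation (illustrative only).
# Assumptions: OpenCV Farneback flow as the velocity estimate, mean gray
# level as brightness, and a relative threshold on feature jumps.
import cv2
import numpy as np

def segment_video(path, boundary_threshold=0.35):
    """Split a video into segments at large jumps in motion or brightness.

    Returns a list of (start_frame, end_frame) index pairs.
    """
    cap = cv2.VideoCapture(path)
    ok, frame = cap.read()
    if not ok:
        raise IOError(f"cannot read video: {path}")
    prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    features = []  # one (velocity, brightness) pair per frame transition
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Dense optical flow between consecutive frames (Farneback method).
        flow = cv2.calcOpticalFlowFarneback(
            prev_gray, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        velocity = np.linalg.norm(flow, axis=2).mean()  # mean flow magnitude
        brightness = gray.mean() / 255.0                # normalized brightness
        features.append((velocity, brightness))
        prev_gray = gray
    cap.release()

    feats = np.asarray(features)
    # Normalize both features so they are comparable, then mark a boundary
    # wherever the feature vector changes abruptly between frames.
    feats = (feats - feats.mean(axis=0)) / (feats.std(axis=0) + 1e-8)
    jumps = np.linalg.norm(np.diff(feats, axis=0), axis=1)
    boundaries = np.where(jumps > boundary_threshold * jumps.max())[0] + 1

    segments, start = [], 0
    for b in boundaries:
        segments.append((start, int(b)))
        start = int(b)
    segments.append((start, len(feats)))
    return segments
```

Raising boundary_threshold yields fewer, coarser segments; lowering it yields finer ones, which loosely mirrors the multi-level idea of re-segmenting at a finer level when no satisfactory match is found.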
Related Work

There has been a lot of work on synchronizing music (or sounds) with video. In essence, there are two ways to make a video match a soundtrack: assembling video segments or changing the video timing.

Foote et al. [3] automatically rated the novelty of each segment of the music and analyzed the movements of the camera in the video. They then generated a music video by matching an appropriate video clip to each music segment. Another segment-based matching method for home videos was introduced by Hua et al. [8]. Amateur videos are usually of low quality and include unnecessary shots. Hua et al. calculated an attention score for each video segment, which they used to extract the more important shots. They analyzed these clips, searching for a beat, and then adjusted the tempo of the background music to make it suit the video. Mulhem et al. [16] modeled the aesthetic rules used by real video editors and used them to assess music videos. Xian et al. [9] used the temporal structures of the video and music, as well as repetitive patterns in the music, to generate music videos.

All these studies treat video segments as primitives to be matched, but they do not consider the flow of the video. Because frames are chosen to obtain the best synchronization, significant information contained in complete shots can be missed. This is why we do not extract arbitrary frames from a video segment, but use whole segments as part of a multi-level resource for assembling a music video.

Taking a different approach, Jehan et al. [11] suggested a method to control the time domain of a video and to synchronize the feature points of both video and music. Using timing information supplied by the user, they adjusted the speed of a dance clip by time-warping, so as to synchronize the clip to t ...
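The time-warping idea mentioned in the last paragraph can be illustrated with a classic dynamic-time-warping alignment between two feature curves. This is a generic sketch rather than Jehan et al.'s method; the 1-D feature curves and the absolute-difference cost are assumptions made for illustration.

```python
# Generic dynamic time warping (DTW) sketch, illustrative only; not the
# method of Jehan et al. Inputs are assumed to be 1-D feature curves,
# e.g. per-frame motion magnitude and per-beat audio energy.
import numpy as np

def dtw_align(video_feat, music_feat):
    """Return the minimum alignment cost and the warp path between
    two 1-D feature sequences."""
    n, m = len(video_feat), len(music_feat)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(video_feat[i - 1] - music_feat[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # hold the music frame
                                 cost[i, j - 1],      # hold the video frame
                                 cost[i - 1, j - 1])  # advance both
    # Trace the optimal warp path back from the end to the start.
    i, j, path = n, m, []
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        _, (i, j) = min((cost[i - 1, j - 1], (i - 1, j - 1)),
                        (cost[i - 1, j],     (i - 1, j)),
                        (cost[i, j - 1],     (i, j - 1)))
    return cost[n, m], path[::-1]

# Example: align a shot's motion curve with a music energy curve.
cost, path = dtw_align([0.1, 0.5, 0.9, 0.4], [0.2, 0.8, 0.5])
```

The returned path lists matched (video frame, music frame) index pairs; resampling the video along this path is one way to realize the kind of speed adjustment that time-warping approaches describe.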
