
Handbook of Multimedia for Digital Entertainment and Arts- P17

Pages: 30      File type: pdf      Size: 639.08 KB
10.10.2023




Document information:

Handbook of Multimedia for Digital Entertainment and Arts- P17: The advances in computer entertainment, multi-player and online games, technology-enabled art, culture and performance have created a new form of entertainment and art, which attracts and absorbs their participants. The fantastic success of this new field has influenced the development of the new digital entertainment industry and related products and services, which has impacted every aspect of our lives.
Content extracted from the document:
21 Projector-Camera Systems in Entertainment and Art

Physically Viewing Interaction

By projecting images directly onto everyday surfaces, a projector-camera system may be used to create augmentation effects, such as virtually painting the object surface with a new color, a new texture, or even an animation. Users can interact directly with such projector-based augmentations. For example, they may observe the object from different sides while simultaneously experiencing consistent occlusion effects and depth, or they can move nearer to or further from the object to see local details and global views. Thus, the intuitiveness of physical interaction and the advantages of digital presentation are combined.

This kind of physically interactive visualization is suitable for situations in which virtual content is mapped as a texture onto real object surfaces. View-dependent visual effects, such as highlighting to simulate virtually shiny surfaces, require tracking of the user's view. Multi-user views can also be supported by time-multiplexing the projection for multiple users, with each user wearing synchronized shutter glasses that select the individual view; this is only necessary for view-dependent augmentations. Furthermore, view tracking and stereoscopic presentation enable virtual objects to be displayed not only on the real surface, but also in front of or behind it. A general geometric framework that handles all of these variants is described in [26].

The techniques described above only simulate the desired appearance of an augmented object that is supposed to remain fixed in space. To make the projected content truly user-interactive, more information than viewpoint changes is required. After turning an ordinary surface into a display, it is further desirable to extend it into a user interface with an additional input channel. Cameras can be used for this sensing.
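The time-multiplexed multi-user presentation mentioned above can be sketched in a few lines. This is an illustrative model, not code from the chapter: frames are assigned to users round-robin, and each user's shutter glasses are open only during that user's frames, so the per-user refresh rate is the projector rate divided by the number of users.

```python
def schedule_frames(num_users, num_frames):
    """Assign each projector frame index to one user, round-robin.

    While user k's frame is displayed, only user k's shutter glasses are
    open, so every user perceives an individual view.
    """
    return [frame % num_users for frame in range(num_frames)]


def effective_rate_hz(projector_hz, num_users):
    """Per-user refresh rate after time-multiplexing."""
    return projector_hz / num_users


assignment = schedule_frames(num_users=2, num_frames=6)
# assignment == [0, 1, 0, 1, 0, 1]: the two users alternate frames
rate = effective_rate_hz(120.0, 2)  # a 120 Hz projector yields 60.0 Hz per user
```

The division of the refresh rate is why this scheme is reserved for view-dependent augmentations: with many users, per-user rates quickly drop below flicker-free levels.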
In contrast to other input technologies, such as the embedded electronics of touch screens, or the tracked wands, styluses, and data gloves often used in virtual environments, vision-based sensing has the flexibility to support different types of input techniques without modifying the display surface or equipping the users with different devices for different tasks. Unlike interaction with special projection screens, such as electronically enabled multi-touch or rear-projected screens, the primary issues of vision-based interaction with front-projected interfaces are the illumination of the detected hand and object, as well as cast shadows.

In the following subsections, two typical types of interaction with spatial projector-camera systems are introduced: near-distance interaction and far-distance interaction. Vision-based interaction techniques are the main focus, and basic interaction operations such as pointing, selecting, and manipulating are considered.

Near Distance Interaction

In near-distance situations, where the projection surface is within arm's length of the user, finger touching or hand gestures are intuitive ways to select and manipulate the interface. Apart from this, the manipulation of physical objects can also be detected and used to trigger interaction events.

Vision-based techniques may use a visible-light or infrared camera to capture the projected surface area. To detect finger touches on a projected surface, a calibration process similar to the geometric techniques presented in section "Geometric Image Correction" is needed to map corresponding pixels between projector and camera. Next, fingers, hands, and objects need to be categorized as foreground in order to separate them from the projected surface background.
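For a planar projection surface, the projector-to-camera pixel mapping required by the calibration step is commonly modeled as a 3x3 homography estimated from a few detected correspondences. The following is a minimal NumPy sketch of the standard direct linear transform (DLT); it illustrates the idea and is not necessarily the exact procedure used in the chapter.

```python
import numpy as np


def estimate_homography(src, dst):
    """Direct Linear Transform: find 3x3 H with dst ~ H @ src.

    src, dst: (N, 2) arrays of matched pixel coordinates, N >= 4,
    e.g. projected marker positions and their detected camera positions.
    """
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # Null-space solution: right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the arbitrary scale


def map_point(H, pt):
    """Apply H to a 2-D point using homogeneous coordinates."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])
```

In practice one would project a known pattern (e.g. a checkerboard), detect its corners in the camera image, estimate H once, and then use `map_point` to transfer any detected fingertip position from camera coordinates into projector coordinates.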
When interactions take place on a front-projected surface, the hand is illuminated by the displayed images, so the appearance of a moving hand changes quickly. This renders segmentation methods based on skin color or region growing useless. Conventional background subtraction is often also unreliable, since the skin color of a hand may be buried in the projected light.

One possible solution to this problem is to expand the capability of background subtraction. Besides its application to an ideal projection screen that is assumed to differ sufficiently from skin color, as in [27], background subtraction can also take different background and foreground reflectance factors into account. When the background changes significantly, segmentation may fail; an image update can be applied to keep the segmentation robust, where an artificial background is generated from the known projector input image, with geometric and color distortions between projector and camera corrected.

Another feasible solution is to detect the pixel areas that change between frames of the captured video to obtain a basic shape of the moving hand or object. Noise can then be removed using image morphology. Following this, a fingertip can be detected by convolving a fingertip-shaped template over the extracted image, as in [28].

To avoid the complex problem of varying visible-light illumination, an infrared camera can be used instead, together with an infrared light source that produces an invisible shadow of the finger on the projected flat surface, as shown in [29]. The shadow of the finger can then be detected by the infrared camera and used on its own to find the finger region and fingertip. To enable screen interaction by finger touching, the position of the finger, either touching the surface or hovering above it, can be further determined by detect ...
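The "artificial background" idea above can be sketched with a simple per-pixel model: predict what the camera should see from the known projected image (here a linear reflectance-plus-ambient model, an illustrative assumption), mark pixels that deviate from the prediction as foreground, and clean the mask with binary erosion as the morphology step. All names and thresholds are hypothetical.

```python
import numpy as np


def predicted_background(projected, reflectance, ambient):
    """Artificial background: what the camera should observe when only
    the known projected image hits the (empty) surface."""
    return reflectance * projected + ambient


def foreground_mask(observed, projected, reflectance, ambient, tau=0.1):
    """Pixels deviating from the prediction by more than tau are
    foreground, e.g. a hand occluding the projection."""
    diff = np.abs(observed - predicted_background(projected, reflectance, ambient))
    return diff > tau


def erode(mask, iterations=1):
    """3x3 binary erosion: a pixel survives only if its whole 3x3
    neighbourhood is foreground, suppressing single-pixel noise."""
    m = mask.copy()
    for _ in range(iterations):
        padded = np.pad(m, 1, constant_values=False)
        out = np.ones_like(m, dtype=bool)
        for dy in (0, 1, 2):
            for dx in (0, 1, 2):
                out &= padded[dy:dy + m.shape[0], dx:dx + m.shape[1]]
        m = out
    return m
```

Geometric and color correction between projector and camera would be applied before the comparison, so that `projected` and `observed` are pixel-aligned; the reflectance and ambient maps can be measured once per surface by projecting uniform white and black frames.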
