Virtual Design Studio
The Virtual Design Studio might enable multiple-location live collaborative events in which there are up to three classes of participants: “facilitators” (leading the process or session); “creators” (a larger group of participants who can generate design material in the context of the event); and “reactors” (a group of participants who experience and provide feedback). The general task is to create, try out, and modify designs in any one of a variety of fields, from stage and exhibit design to costumes, fashion, and architecture. Requirements might include:

- The facilitator assigns roles and moves participants in and out of the center stage, perhaps with a wand or motion capture system.
- Designs are displayed in multiple views to all participants, either on multiple screens or in some kind of immersive viewing arrangement.
- Creators can create in real time at workstations or (better) with a wand, virtual stylus, motion capture, etc.
- In some cases, tactile, pressure, smell, or other sensors might convey the unique quality of a design or creation.
- Reactors can react both as a group and individually, using feedback devices or wide-angle motion sensing; in some cases, reactors may become creators, or shift their individual viewpoints of the designs separately from other reactors; reactions can be aggregated.
- “Co-creation” with large groups of creators/reactors is an interesting...
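The role mechanics above (facilitator-controlled role assignment, stage movement, and reactors becoming creators) can be sketched as session state. This is a minimal illustration, not a specification; all names and the permission rules are assumptions.

```python
from enum import Enum

class Role(Enum):
    FACILITATOR = "facilitator"
    CREATOR = "creator"
    REACTOR = "reactor"

class StudioSession:
    """Tracks participant roles and who is on the center stage."""

    def __init__(self, facilitator):
        self.roles = {facilitator: Role.FACILITATOR}
        self.on_stage = set()

    def assign(self, actor, participant, role):
        # Only a facilitator may assign or change roles.
        if self.roles.get(actor) is not Role.FACILITATOR:
            raise PermissionError("only a facilitator may assign roles")
        self.roles[participant] = role

    def move_on_stage(self, actor, participant):
        # Stage movement is likewise facilitator-driven (e.g. via a wand gesture).
        if self.roles.get(actor) is not Role.FACILITATOR:
            raise PermissionError("only a facilitator controls the stage")
        self.on_stage.add(participant)

    def promote_to_creator(self, actor, reactor):
        # A reactor may become a creator mid-session.
        if self.roles.get(reactor) is not Role.REACTOR:
            raise ValueError("can only promote reactors")
        self.assign(actor, reactor, Role.CREATOR)
```

In this sketch the wand or motion capture interface would simply translate gestures into calls like `move_on_stage`; the session object stays input-agnostic.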
Virtual Classroom
The Virtual Classroom scenario would enable multiple-location live events in which there are up to four classes of participants: “lead teacher”; “visiting teacher”; “performing student”; and “observing student”. The goal is to enable a range of class types, including multiple groups of students in different locations. Requirements might include:

- The lead teacher assigns roles and moves participants in and out of the center stage, perhaps with a wand or motion capture system; the lead teacher can pass the facilitator role to the visiting teacher.
- A visualization system allows basic sketching or illustration of concepts; this might include whiteboarding, sequences of slides, a photo, audio, or video library, etc., up to moving through a 3D immersive environment.
- Students can react both as a group and individually, using feedback devices or wide-angle motion sensing; reactions can be aggregated and used to drive interactive content.
- Students in multiple locations can see and speak to each other individually in some cases, or break out into smaller groups to work together.
- Students can take the stage individually or in a small group to present work or perform.
- Students in some types of classes can create work in class and share it with the class for feedback, similar to the Virtual Design Studio...
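The idea of aggregating student reactions to drive interactive content could work roughly as follows. The scale, thresholds, and branch names here are purely illustrative assumptions, not part of the scenario.

```python
from statistics import mean

def aggregate_reactions(reactions):
    """Collapse per-student feedback (assumed -1.0 .. 1.0) into one group signal."""
    if not reactions:
        return 0.0
    return mean(reactions.values())

def next_content(signal, threshold=0.3):
    """Pick a content branch from the aggregate signal (thresholds are illustrative)."""
    if signal > threshold:
        return "advance"   # class is following: move on
    if signal < -threshold:
        return "review"    # class is lost: revisit the material
    return "poll"          # mixed signal: ask a clarifying question
```

Because each student contributes an individual value, the same data supports both group-level branching and per-student follow-up.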
Virtual Ideation Workshop
The “Virtual Ideation Workshop” scenario would enable multiple-location live events in which there are two classes of participants: “facilitators” (leading the process or session) and “ideators” (a larger group of participants who can generate ideas and concepts in the context of the event). The general task is to enable brainstorming and group problem-solving in any one of a variety of fields, from the arts to social change and business innovation. Requirements might include:

- The facilitator assigns roles and moves participants in and out of the center stage, perhaps with a wand or motion capture system; the facilitator can pass the facilitator role to any of the other ideators.
- A visualization system allows basic sketching or illustration of concepts; this might include whiteboarding, sequences of slides, a photo, audio, or video library, etc., up to moving through a 3D immersive environment. Ideally, navigation is under the control of a wand or motion capture interface so it is both visible and intuitive.
- Ideators can react both as a group and individually, using feedback devices or wide-angle motion sensing; reactions can be aggregated to help determine the direction of the session.
- Ideators may need to break out quickly into different discussion groups, e.g. talking directly to each other rather than through the facilitator and group...
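Two of these requirements, passing the facilitator role and breaking out quickly into discussion groups, are small state transitions. A minimal sketch, with illustrative names and an assumed simple round-robin grouping:

```python
class Workshop:
    """Minimal session state: one facilitator, everyone else an ideator."""

    def __init__(self, facilitator, ideators):
        self.facilitator = facilitator
        self.participants = {facilitator, *ideators}
        self.breakouts = {}   # participant -> breakout group id

    def pass_facilitation(self, new_facilitator):
        # Any ideator can take over when the current facilitator hands off.
        if new_facilitator not in self.participants:
            raise ValueError("unknown participant")
        self.facilitator = new_facilitator

    def break_out(self, group_size):
        # Quickly split participants into direct-discussion groups.
        members = sorted(self.participants)
        self.breakouts = {p: i // group_size for i, p in enumerate(members)}
        return self.breakouts
```

Ending a breakout would just clear `self.breakouts` and return everyone to the shared session.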
Virtual Multisite Exhibits
The “Virtual Multisite Exhibit” scenario might consist of multiple-location exhibits in which there is some live interaction between participants in different spaces. There might be three classes of participants: “visitors” (walking through the space and interacting with exhibits and each other); “docents” (guiding or informing the interactions); and “performers” (e.g. actor/interpreters embedded in the exhibit experience). There are many possible variations on this general concept. For example:

- Two exhibits in different locations are linked via a combination of videoconferencing and motion sensing.
- Some exhibits in each space are “windows” into the other space, allowing visitors to see each other and the different environment of the second space.
- Some exhibits are windows into other private spaces, in which performers are seen doing things related to the exhibit theme; they demonstrate awareness of the visitors and interact with them.
- Some exhibits are affected by visitors walking by, and these effects are reflected or aggregated between the locations.
- Visitors might be tagged or profiled with icons/avatars, and be tracked through the spaces on a map which combines both spaces.
- Docents appear in some of the exhibits or remote spaces, gather a group for discussion, and follow the group (virtually) from exhibit to...
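The combined map of both spaces amounts to merging two local coordinate frames into one. A sketch under simple assumptions (each site reports visitor positions in local metres, and the second site is drawn beside the first):

```python
def combined_map(site_a, site_b, width_a):
    """Merge per-site visitor positions into one shared map frame.

    site_a / site_b map visitor id -> (x, y) in local metres; site B is
    placed to the right of site A, offset by site A's width.
    """
    merged = {}
    for visitor, (x, y) in site_a.items():
        merged[visitor] = ("A", x, y)
    for visitor, (x, y) in site_b.items():
        merged[visitor] = ("B", x + width_a, y)
    return merged
```

Keeping the site label alongside the merged coordinates lets the map still distinguish which physical space each tagged visitor is in.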
Virtual Performance Platform
A “Virtual Performance Platform” would enable multiple-location live events in which there are two classes of participants: “lead performers” (e.g. actors, dancers, presenters) and “audience performers” (a larger group of participants who also interact in the context of the event). The “content” of the event (characters, story, setting, even the type of interaction … dance, theater, narrative installation, etc.) is flexible and easy to create and modify. This general concept leads to the following preliminary requirements:

- The lead performer in each location wears a high-resolution motion capture system, which supplies data to drive (for example) a virtual body in real time.
- The virtual bodies interact in a shared virtual space which is visible in each location.
- A lower-resolution sensing system captures data of some kind from the “audience performers”; ideally, each audience member has some individual input or distinguishable impact on the data.
- The data from the audience sensing system is also represented in the shared virtual space.
- The particular representations of the lead performer and audience performer data, as well as the virtual setting, need to be easy to create and modify, so this should be based on widely available content creation...
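The two data streams above, high-resolution motion capture per lead performer and lower-resolution per-member audience input, merge into one shared scene state per frame. The data shapes here are assumptions chosen only to make the idea concrete:

```python
def shared_scene(mocap_frames, audience_samples):
    """Build one frame of the shared virtual space.

    mocap_frames: location -> list of (joint, x, y, z) for that lead performer.
    audience_samples: location -> list of per-member scalar inputs.
    """
    scene = {"bodies": {}, "audience": {}}
    for location, joints in mocap_frames.items():
        # High-resolution capture data drives one virtual body per location.
        scene["bodies"][location] = {j: (x, y, z) for j, x, y, z in joints}
    for location, samples in audience_samples.items():
        # Audience data is kept per member, so each has a distinguishable
        # impact, plus a summed group level for aggregate effects.
        scene["audience"][location] = {"members": list(samples),
                                       "level": sum(samples)}
    return scene
```

Because the scene is plain data, how the bodies and audience levels are rendered can be left to interchangeable representations built in ordinary content creation tools.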