Recreating The Posture Portraits: "Artistic and Technological (Re)Productions of

Andrea Baldwin, Heidi Henderson, James Lee

By Emily Green '18, Rishma Mendhekar '18, Joseph Castro '19, Laura Pratt, Auburn University '17

This research project was awarded an Ammerman Center grant to research the posture pictures (nude photographs taken of all students and used to diagnose poor posture) taken from the 1920s through the 1960s at Connecticut College and other New England institutions, with the goal of presenting this research and crafting a performance that combines dance and technology.

Using Motion Capture to Synthesize Dance Movements

Baird, B., Izmirli, O., Joshi, A. '12 (2011)

Motion capture presents an interesting opportunity for the analysis and synthesis of movements in dance. We have created a tool that uses concatenative synthesis of dance movement based on a library of prerecorded basic movements. Dance movements are first broken into small, discrete movements following the guidelines of Laban dance notation. These movements are then performed by dancers and recorded using motion capture. Finally, these (edited) sequences are placed in a 3D virtual environment where the user can synthesize movements to form a choreographed composition. Such a pedagogical tool provides a creative way to understand and study dance movements.
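The concatenative step described above can be sketched as follows. This is a minimal illustration, not the project's actual tool: the clip names, the toy two-frame clips, and the `synthesize` function are all hypothetical stand-ins for a real library of motion-captured Laban segments.

```python
# Hypothetical library of prerecorded basic movements, keyed by a
# Laban-style label; each clip is a list of per-frame joint positions
# (here reduced to a single "hand" coordinate for illustration).
motion_library = {
    "reach_high": [{"hand": (0.0, 1.8)}, {"hand": (0.1, 2.0)}],
    "step_left":  [{"hand": (0.0, 1.2)}, {"hand": (-0.3, 1.2)}],
}

def synthesize(sequence, library):
    """Concatenate the named clips, in order, into one frame sequence."""
    frames = []
    for name in sequence:
        frames.extend(library[name])
    return frames

choreography = synthesize(["reach_high", "step_left"], motion_library)
print(len(choreography))  # 4 frames in the composed movement
```

A real system would additionally blend the boundary frames between clips so that concatenated movements join smoothly rather than jumping between poses.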


Conducting a Virtual Ensemble

Baird, B., Izmirli, O.

Experienced conductors of music ensembles are not metronomes: their hand movements and the speed at which the players perform exhibit a complex time- and context-dependent means of communication. The aim of this project is to analyze this relationship between the conductor's movements and the actual tempo as performed by the players, and to apply the results of this analysis to construct a computer-based system that mimics the salient behavior of a real ensemble. Models for different conductors are obtained by first having a conductor lead a live ensemble. Data from the conductor is obtained using 3-D position sensors; the performance is also recorded digitally in a sound file. Velocity, acceleration, direction, and position data of the movements are used to extract features and determine the location of beats. By synchronizing this information with the audio data and by using information about music performance and the score, a model is constructed that produces, at each point in time, the implied tempo for the ensemble. Thus this model ultimately deduces implied tempo from hand movements. Once a model has been formed, the system can be put into "perform" mode, in which the user can "conduct" in real time by controlling the playback speed of a MIDI sequencer. The conductor uses 3-D trackers to conduct; data from hand movements is fed into the model, processed in real time, and then used to control the tempo of the virtual ensemble. This computer system enables conducting students to experience the complex coupling between movements and the actual tempo of the ensemble, and to conduct a "virtual" ensemble or a mixed virtual/real ensemble.
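The beat-location and implied-tempo steps can be illustrated with a deliberately simplified sketch. This is not the project's model: it assumes, purely for illustration, that a beat falls at each local minimum of the hand's vertical position (the ictus of a downward stroke) and that tempo is the reciprocal of the mean inter-beat interval; the sampled data and function names are invented.

```python
def find_beats(times, heights):
    """Return the times of local minima in vertical hand position,
    taken here as a crude proxy for beat locations."""
    beats = []
    for i in range(1, len(heights) - 1):
        if heights[i] < heights[i - 1] and heights[i] <= heights[i + 1]:
            beats.append(times[i])
    return beats

def implied_tempo(beat_times):
    """Tempo in beats per minute from the mean inter-beat interval."""
    intervals = [b - a for a, b in zip(beat_times, beat_times[1:])]
    return 60.0 / (sum(intervals) / len(intervals))

# Hypothetical samples of (time in s, hand height in m): the hand dips
# every 0.5 s, so the implied tempo should come out at 120 BPM.
times = [0.0, 0.25, 0.5, 0.75, 1.0, 1.25, 1.5]
heights = [0.2, 0.6, 0.2, 0.6, 0.2, 0.6, 0.2]
print(implied_tempo(find_beats(times, heights)))  # 120.0
```

The system described above goes well beyond this: it fuses velocity, acceleration, and direction features with score information, whereas this sketch uses position alone.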


Using Haptics and Sound in a Virtual Gallery

Baird, B., Izmirli, O., Smalley, D.

Galleries are traditionally places for the visual exploration of objects; concert halls provide auditory exploration. The tools of virtual reality allow for a new kind of gallery: one that encompasses the features of a traditional visual museum, means for auditory discovery, and, in addition, haptic exploration. The user is invited to browse through this virtual gallery, interacting with the objects, feeling their textures, listening to their audio properties, and moving around and inside them. All of this takes place in an interactive 3D environment where the user navigates and explores with her eyes, ears, and hands.