Conventions of Control in Interactive Architecture
Thursday, March 8, 2012
Document Links Fixed....
Correct links are up... first time writing HTML, so I should have known there would be an error.
Wednesday, March 7, 2012
Hypothesis and next steps...
Hypothesis:
Manipulations of the physical world have the most influence on gestural vocabularies - as shown by the large number of metaphoric gestures in The State of the Art Matrix analyzing mobile and media devices. Metaphoric gestures reference motions related to physical objects in the real world, often with no association to their modern digital adaptations. An example of a metaphoric gesture is shaking a mobile phone in order to exit running applications and return to the home screen. In the physical world, we shake a carton of juice, a bag of popcorn, or even another person, to return the thing to a stable or restored state that is ready for the next task (or ready for eating).
When interacting with physical rather than digital objects in an interactive space, users will favor metaphoric gestures even more overwhelmingly. However, my hypothesis is that metaphoric gestures are not appropriate in an architectural setting, because an interactive space offers no abstraction of reality. I also believe that although sequential gestures (based on existing touch-interface languages) will be popular, they will be ineffective due to the absence of a physical input device. Users will instead prefer non-metaphoric or responsive gestures that allow additional levels of interaction with the space. An example of a responsive gesture is covering your eyes when the room is too bright; this gesture can trigger the closing of the window shades. Finally, I consider the possibility that users will prefer non-intuitive gestures involving exaggerated physical movements, as a response to the removal of a constraining physical interface device.
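To make the responsive-gesture idea concrete, here is a minimal Python sketch of the eyes-covered-in-a-bright-room example. Everything in it is hypothetical: the skeleton joint names, the thresholds, and the close_shades() hook stand in for whatever the eventual Kinect pipeline and building controls provide.

```python
# Illustrative sketch of a "responsive" gesture: the trigger combines a body
# pose (hands held near the eyes) with an environmental condition (a bright
# room). Thresholds, joint names, and close_shades() are placeholders.

BRIGHTNESS_THRESHOLD = 0.8   # normalized light-sensor reading, 0..1
HAND_EYE_DISTANCE = 0.12     # meters; tolerance for "hands covering eyes"

def distance(a, b):
    """Euclidean distance between two 3D points given as (x, y, z) tuples."""
    return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

def hands_cover_eyes(skeleton):
    """True if both hands are close to the head (a stand-in for the eyes)."""
    head = skeleton["head"]
    return (distance(skeleton["left_hand"], head) < HAND_EYE_DISTANCE and
            distance(skeleton["right_hand"], head) < HAND_EYE_DISTANCE)

def respond(skeleton, room_brightness, close_shades):
    """Fire the shade-closing action only when pose AND condition both hold."""
    if room_brightness > BRIGHTNESS_THRESHOLD and hands_cover_eyes(skeleton):
        close_shades()
```

The point of the sketch is that a responsive gesture is conditional: the same pose in a dark room would trigger nothing.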
My arguments against directional metaphoric gestures:
Using literal metaphoric gestures to interact with architectural components such as doors or windows would require a high level of gesture customization to match the physical acts of manipulating those objects, which are designed in many different ways. Doors can have hinges on any one of 12 edges. They can swing, pivot, slide, roll, or fold open or closed. Using a literal metaphoric vocabulary would mean creating a different gesture to manipulate each door configuration.
If you are unlucky enough not to be an architect or someone with technical know-how, you might not be able to figure out how some doors physically work, and thus you will not be able to operate them even with a simple gesture. Besides, gestural control is supposed to enhance an interactive environment, not burden its users. The ultimate purpose of a gesture is to send a signal. Whether that signal means open, close, or rotate, it is up to the programming of the computer running in the background and the mechanics of the architecture to complete the given task, not the user.
Even abstract metaphoric gestures are frequently directional and can consequently be confusing. A flick to the left meaning "close any door" stops making sense once you are on the other side of the door, where it closes in the opposite direction. It is best to use non-directional metaphoric gestures in the manipulation of interactive architecture, to eliminate confusion among users who encounter various types of the same building component each day. An example of a non-directional metaphoric gesture: from a fist, pop up your pointer finger, the "number 1" sign. This is metaphoric in that it represents an object popping open, like a jack-in-the-box. Since it is non-directional and not hardware-specific, it can be used as a universal gesture to open any type of window or door.
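One way to picture this argument in code: the "number 1" gesture emits a single abstract open signal, and it is the building's controller, not the user, that translates that signal into the mechanics of whichever door or window it targets. A rough Python sketch, with door types and print statements invented purely for illustration:

```python
# Sketch: one non-directional gesture maps to one abstract signal ("open"),
# and each component resolves its own hardware-specific mechanics.
# The door classes and their behaviors are invented for illustration.

class Door:
    def open(self):
        raise NotImplementedError

class HingedDoor(Door):
    def __init__(self, swing_direction):
        self.swing_direction = swing_direction  # e.g. "inward" or "outward"
    def open(self):
        print(f"swinging {self.swing_direction} on hinges")

class SlidingDoor(Door):
    def open(self):
        print("sliding along its track")

class PivotDoor(Door):
    def open(self):
        print("rotating about its pivot axis")

def handle_gesture(gesture, target):
    """The gesture only names an abstract command; the component decides how."""
    if gesture == "index_finger_up":   # the non-directional "number 1" sign
        target.open()

# Any door type responds to the same universal gesture:
for door in (HingedDoor("outward"), SlidingDoor(), PivotDoor()):
    handle_gesture("index_finger_up", door)
```

The user never has to know whether the panel swings, slides, or pivots; the gesture is the same everywhere.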
Next steps:
The Architectural Matrix lists my proposed gestures for interactive architecture applications. These include: system activation/sleep, "home" to exit out of a command, selection gestures, an "option" command applicable to augmented reality applications, generic open/close, generic adjustable open/close, generic 3D rotate, and a few more specific responsive commands (environmentally intuitive). I have left open space to record user-generated gestures during the testing phase.
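As a preview of how the matrix could map onto an implementation, the proposed command set can be written down as a simple enumeration. The names below are my own shorthand for the matrix entries, not a finished API:

```python
# The proposed architectural gesture vocabulary as an enumeration.
# Command names are shorthand for the matrix entries, not fixed identifiers.
from enum import Enum, auto

class ArchCommand(Enum):
    ACTIVATE = auto()           # system activation / wake
    SLEEP = auto()              # system sleep
    HOME = auto()               # exit out of the current command
    SELECT = auto()             # selection gesture
    OPTION = auto()             # "option" command for augmented reality uses
    OPEN = auto()               # generic open
    CLOSE = auto()              # generic close
    ADJUST_OPEN_CLOSE = auto()  # generic adjustable (partial) open/close
    ROTATE_3D = auto()          # generic 3D rotate
    RESPONSIVE = auto()         # environmentally intuitive responsive commands
    USER_DEFINED = auto()       # open slot for user-generated gestures from testing
```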
I am most interested in further defining "intuition" as related to the creation of gestural languages. I wish to discover the most referenced categories of relative intuition among test subjects prompted to "invent" gestures for specific architectural tasks. In pursuit of defining a suggested catalogue of gestures for manipulating interactive space, I would like to devise a method to test my proposed gestures against user-generated gestures in the following areas: frequency of "invention", learnability, memorability, performability, efficiency, and opportunity for error.
To test my proposed gestures and provide a setting to generate user input, a test cell must be constructed that contains the following: two or more doors and windows that are configured or manually operated in different ways, one set of rotating vertical blinds on a window, and an optional sliding wall. The components will function as visual cues for the test subjects and do not even need to be wired to a motor or sensor for the initial testing phases, where gesture definition and generation are most important. A video camera will record user-generated gestures and catalogue successes or failures in the tested areas mentioned above. System feedback can be communicated to the test subjects via sound if the components are not wired for initial testing: a simple beep will alert users when they have successfully triggered the correct command from the system.
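Here is a sketch of what one trial in the unwired test cell could look like in Python. The recognizer, beep, and frame source are placeholders for the eventual Kinect pipeline; the only point is to show how audio feedback and success/failure logging could work before any component is motorized.

```python
# Sketch of one unwired test-cell trial: capture an attempt, check it against
# a recognizer, log success/failure with timing, and beep as the only feedback.
# recognize(), beep(), and the frames source are hypothetical placeholders.
import csv
import time

def run_trial(subject_id, task, frames, recognize, beep, log_path="trials.csv"):
    """Run one prompted task and append the outcome to a CSV log.

    frames    -- iterable of captured camera/skeleton frames for the attempt
    recognize -- callable returning the detected command name or None
    beep      -- callable that plays the audio feedback tone
    """
    start = time.time()
    detected = recognize(frames)          # e.g. "OPEN", "CLOSE", or None
    success = (detected == task)
    if success:
        beep()                            # sole system feedback while unwired
    with open(log_path, "a", newline="") as f:
        csv.writer(f).writerow(
            [subject_id, task, detected or "none", success,
             round(time.time() - start, 2)]
        )
    return success
```

Logging each attempt this way would also feed directly into the learnability, efficiency, and error measures listed above.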
03.07.2012 Progress
State of the Art Matrix loaded.
Gesture Illustrations posted - later, I will make them into a matrix that references the other matrices.
Architectural Vocabulary Matrix loaded.
Concept Diagrams - 2 loaded, more to come.
Don't Click It - a world without buttons
Visit www.dontclick.it to experience a hybrid UI that uses only motion control with a mouse (no clicking). Once you get the hang of navigating the site, it is easy to understand how "sweeping" and "hovering" motions can apply to media interfaces and virtual reality, eliminating the need for a separate 'selection' gesture entirely.
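The hover-to-select pattern on dontclick.it can be approximated with a simple dwell timer: if the cursor (or, later, a tracked hand) stays over a target long enough, that counts as a selection and no separate selection gesture is needed. A minimal Python sketch of the idea, with an arbitrary dwell threshold:

```python
# Dwell-based selection: hovering over a target long enough "selects" it,
# removing the need for a separate click or selection gesture.
# The 0.8-second threshold is arbitrary and would need tuning.
import time

class DwellSelector:
    def __init__(self, dwell_seconds=0.8):
        self.dwell_seconds = dwell_seconds
        self.current_target = None
        self.hover_started = None

    def update(self, target):
        """Call every frame with the target under the cursor/hand (or None).

        Returns the target once it has been hovered continuously for the
        dwell duration, otherwise None.
        """
        now = time.time()
        if target != self.current_target:
            self.current_target = target
            self.hover_started = now if target is not None else None
            return None
        if target is not None and now - self.hover_started >= self.dwell_seconds:
            self.hover_started = now   # reset so the selection does not repeat
            return target
        return None
```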
Tuesday, March 6, 2012
03.06.2012 Project Schedule
(Week 10) Wednesday, March 7
Complete State of the Art Matrix and first half of Architectural Matrix
Outline proposal of what to test in the architectural test cell
(Week 11) Friday, March 16th
FINAL SUBMISSION for the quarter
*Turn in draft of research paper for ACADIA submission including finalized matrices
Begin Kinect coding and set-up
(Spring Break) Wednesday, March 21st
Set up test cell during this week
Kinect coding and test cell construction
(Week 1) Monday, March 26th
Begin testing student/faculty population
Testing occurs Monday-Friday of this week
(Week 2) Monday, April 2nd
Analyze testing results
Turn in a final draft of research paper for Michael's review
(Week 3) Tuesday, April 10th
ACADIA submission deadline
03.06.2012 Tasks
Current tasks include:
Refine the state of the art matrix
Create more concept diagrams
Start the coding process to set up Kinect-based experiments
Begin construction in the test cell
Tasks for later:
Compile more of my saved web links onto the research page
Link the site to published research papers that the project will reference