Friday, 8 May 2009

Pervasive Computing Definition

GROUP A's Definition of Pervasive Computing

Pervasive computing, or ubiquitous computing as it is also known, is a form of human-computer interaction that aims to move beyond the standard use of the computer, keyboard and mouse. It aims to make computing interactive for the user without them actually being aware of it.

It can be seen as machines starting to fit the human environment, rather than the user being forced to enter a computing environment.

It's often summed up as 'COMPUTING DEVICES ARE EMBEDDED EVERYWHERE'!

Any computing technology that takes the user away from the single workstation (PC) is seen as pervasive computing. This in turn inspires application development away from the desktop!

It also breaks the paradigm of a standard PC and the way in which we interact with the physical world.

Pervasive computing has also led to important changes to input and output interactions which define our computing experience!

Multi-sensory feedback - usability and HCI problems

Multi-sensory feedback technology is widely adopted in virtual reality environments, and proper tests should be performed to evaluate its usability. In the study by Zhang et al. (2007), an assessment of the effect of multi-sensory feedback on the usability of a Virtual Assembly Environment (VAE) was conducted. A VAE, in the manufacturing industry, allows easy evaluation of assembly-related engineering decisions through analysis, predictive models, visualisation and data presentation. In addition, it has the potential to factor in the human element and considerations of the completed product without needing a physical realisation. For the non-command interface an optical tracking system was applied, which provides a real-time measurement of the position and orientation of the tracked targets, i.e. the user's head and hands.

The experiment assessed three different attributes of the system: efficiency of use, user satisfaction and reliability. These were evaluated using task completion times, questionnaires and human performance error rates respectively. A group of 16 people was invited to conduct the tests. The research assumed several hypotheses:

• The use of visual feedback can lead to better usability than the neutral condition (neither visual nor auditory feedback)

• The use of 3D auditory feedback can lead to better usability than the neutral condition
• The use of integrated feedback (both visual and auditory) can lead to better usability than either feedback used in isolation

The results of the experiment show that the use of integrated multi-sensory feedback improves the efficiency of use (the tasks were completed much faster), user satisfaction (more participants preferred the integrated feedback and found it more helpful) and reliability (mistakes, slips, lapses and mode errors happened less frequently with the integrated feedback). Beyond the usability evaluation, multi-sensory feedback still has some design issues that need to be overcome.
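The three measures above can be summarised with a short sketch. This is purely illustrative: the condition names, participant counts and all the numbers below are invented for the example, not data from Zhang et al. (2007).

```python
# Illustrative summary of the three usability measures per feedback
# condition: mean task completion time and error rate. All figures
# are made up for the sketch, not the paper's data.

def mean(xs):
    return sum(xs) / len(xs)

# completion times (seconds) for four hypothetical participants per condition
times = {
    "neutral":    [41.0, 38.5, 44.2, 40.3],
    "visual":     [35.1, 33.7, 36.9, 34.4],
    "auditory":   [36.2, 34.9, 37.5, 35.8],
    "integrated": [29.8, 28.4, 31.0, 30.2],
}

# error counts (slips, lapses, mode errors) over a fixed number of trials
errors = {"neutral": 9, "visual": 5, "auditory": 6, "integrated": 2}
trials = 40

for cond in times:
    print(f"{cond:10s} mean time {mean(times[cond]):5.1f}s  "
          f"error rate {errors[cond] / trials:.2%}")
```

With numbers shaped like these, the integrated condition comes out fastest and least error-prone, which is the pattern the study reports.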

The design issues of spatial input (Hinckley et al.) can be grouped into two categories: human perception and ergonomics, and facility of interaction. Multi-sensory feedback is essential to the understanding of space (Gibson). However, some of the devices used in a virtual reality interface may not provide appropriate feedback for the user. According to the experiment by Hinckley et al. with a virtual flashlight, glove-based input can be helpful, but there are kinematic constraints of the hand that do not allow certain movements.

For example, it is easier to rotate something held in the fingers than to rotate the whole hand itself. Additionally, the mass of a tool is quite important, as it can damp instabilities in the user's hand motion, which matters greatly for, say, surgeons. As far as ergonomics is concerned, several problems may occur. In the case of complex applications such as a document editor, direct manipulation needs to be constrained by techniques like gridding. Appropriate feedback mechanisms still need to be developed for use in spatial interfaces.

In short, "The design hurdle is this: provide an interface which effectively integrates rapid, imprecise, multiple degree-of-freedom object placement with slower, but more precise object placement, while providing feedback that makes it all comprehensible."

In a nutshell, the usability of multi-sensory feedback technology can also be evaluated after the product is launched, by the people who buy or use it. For financial reasons this can be a risky venture, but it is essential for the development of pervasive computing. In most cases, applied multi-sensory feedback technologies are successful and helpful in our lives. One of the examples below also shows how sensory impairments can be accommodated with the use of new technologies.

Conclusion

I think that multi-sensory feedback stimulates human imagination far more than traditional intuitive interfaces. A movement is made with respect to one's inner intention, which adds to the "naturalness" of a particular experience. Multi-sensory feedback enhances the user's experience into one that is more "realistic" and more "responsive" to what they intend. Therefore I believe multi-sensory feedback is the way forward in technology, as it can improve various aspects of our lives without many drawbacks.

Different uses of Multi-Sensory Feedback

EDUCATION: In education, multi-sensory feedback can be used to increase students' abilities and skills. There are teaching sessions called "Orton-Gillingham" which involve "constant interaction between the teacher and the student and the simultaneous use of multiple sensory input channels". With this interaction and the use of multiple input channels (hearing, sight and touch), not only is optimal learning achieved, but memory storage is also enhanced. Multi-sensory feedback becomes even more valuable when students have an impairment, such as a sight, mental or physical impairment. In this case the efforts are closely monitored and more than one sensory input channel is used.

Picture 1



http://www.youtube.com/user/twardURS

MUSIC: Another example of a multi-sensory interface is the "reactable", an interface for producing music. Its functionality is based on moving and rotating physical objects on a luminous table, which draws different shapes and plays different tones according to the movements of these physical objects. The feedback received by the user is both the light and the music produced by the movements; that is, touch, hearing and sight are all used in this interface. In the following video the basics of the user interface, how it works and the feedback it produces can be appreciated.

Picture 2



http://www.youtube.com/watch?v=MPG-LYoW27E&feature=player_embedded [25 October 2006]
http://www.youtube.com/watch?v=vm_FzLya8y4&feature=related
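The mapping idea behind an interface like the reactable can be sketched in a few lines: an object's pose on the table drives sound parameters. The function name, pitch formula and volume rule here are assumptions for illustration only, not the real reactable software.

```python
# Illustrative sketch: map a tangible object's position and rotation
# on the table to a tone. The specific formulas are invented.

import math

def object_to_tone(x, y, angle_deg, table_radius=0.5):
    """Map an object's pose to (frequency_hz, volume)."""
    # rotation picks a pitch within one octave above A440
    frequency = 440.0 * 2 ** ((angle_deg % 360) / 360.0)
    # distance from the table centre controls volume (closer = louder)
    distance = math.hypot(x, y)
    volume = max(0.0, 1.0 - distance / table_radius)
    return frequency, volume

freq, vol = object_to_tone(0.1, 0.2, 180.0)
print(f"{freq:.1f} Hz at volume {vol:.2f}")
```

Turning the object sweeps the pitch through an octave, while sliding it towards the edge fades the sound out — so the same physical gesture produces synchronised visual, tactile and auditory feedback.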

AIRFORCE: A pilot's attention can be drawn to unexpected changes without affecting his performance while piloting. If sight were the only sense used to highlight important system status or unexpected changes, the pilot would lose concentration. Instead, if the wide range of sensory modalities available to human beings, such as hearing, sight and touch, were used, effective multi-sensory feedback such as sound could capture the pilots' attention "without affecting their performance on current tasks".

Picture 3


Figure 3: A training cockpit. Source:

http://www.confederationc.on.ca/ace/images/Frasca-B58-Beech-Baron-cockpit.jpg
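The idea of spreading alerts across the senses can be sketched as a simple routing rule: the more urgent the alert, the more channels it uses, so the pilot's eyes are not the bottleneck. The severity levels and channel assignments below are invented for illustration, not taken from any real avionics system.

```python
# Illustrative sketch: route cockpit alerts to sensory channels by
# severity, so unexpected changes grab attention without overloading
# vision. Levels and assignments are assumptions for the example.

def alert_channels(severity):
    """Pick the feedback channels for an alert of a given severity."""
    if severity == "critical":
        return ["visual", "auditory", "tactile"]  # redundant, hard to miss
    if severity == "caution":
        return ["auditory"]  # heard without looking away from the task
    return ["visual"]        # routine status stays on the display

print(alert_channels("caution"))
```

A caution is heard rather than seen, so the pilot's visual attention stays on the current task, as the paragraph above argues.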

GAMING: Together with visual and sound effects, force feedback devices are used to create a more realistic gaming experience. In racing games the wheel reacts differently to each surface and manoeuvre the user performs. The same goes for joysticks and flight simulators, which can simulate bad weather conditions and turbulence.
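The surface-dependent wheel behaviour can be sketched as a lookup plus a speed scaling. The surface names and rumble coefficients below are invented for the example, not values from any real game engine.

```python
# Illustrative sketch of surface-dependent force feedback in a racing
# game: each surface has a base vibration amplitude, scaled by speed
# and clamped to the device's maximum. All coefficients are invented.

SURFACE_RUMBLE = {   # base vibration amplitude per surface (0..1)
    "tarmac": 0.05,
    "gravel": 0.55,
    "grass":  0.35,
}

def wheel_feedback(surface, speed_kmh):
    """Return a vibration amplitude in [0, 1] for the wheel motor."""
    base = SURFACE_RUMBLE.get(surface, 0.1)   # default for unknown surfaces
    return min(1.0, base * (speed_kmh / 100.0))

for s in ("tarmac", "gravel"):
    print(s, wheel_feedback(s, 120))
```

Driving onto gravel at speed makes the wheel shake much harder than smooth tarmac, which is the tactile cue the paragraph describes.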