New Trends In Human Computer Interaction

The main purpose of studying human-computer interaction (HCI) is to develop techniques that enhance the way users interact with their computers and make that interaction more intuitive. Physical devices such as the keyboard and mouse hinder the intuitiveness and naturalness of the interface because they place a strong barrier between the user and the computer. With the development of ubiquitous computing, human-computer interaction is no longer limited to keyboard and mouse input. Using the hands directly as an input device is an attractive way to provide natural interaction, compared with traditional text-based and graphical user interfaces. Although the market for hand gesture-based interface design is huge, building a robust hand gesture recognition system remains a challenging problem for traditional vision-based approaches. A hand gesture recognition system that can efficiently track both static and dynamic hand gestures would therefore give users an intuitive and natural interface to their computers.

Such a system translates the detected gesture into actions such as opening websites, launching applications, and many more, with very minimal hardware. Another approach to making interaction more intuitive is the use of gaze gestures, where a head-mounted display (HMD), a portable interactive display device, tracks eye movement as the means of interaction. This technique is highly effective and almost effortless for the user, because humans can freely control their eye movements, so eye-tracking technology can serve as a method for HCI. At present, HCI is well established for hand gestures and voice input, but those methods are unsuitable when both hands are occupied or in environments where speech is not an option.

Thus, a simpler and more effective approach to HCI with HMDs is crucial. The gaze-based system achieves HMD-based interaction using the HMD's webcam to detect and track the human gaze direction in real time at close range and to analyze the user's intent from that gaze. In recent years, HCI research based on gaze gestures has emerged and is growing rapidly.

Methodology For Hand Gesture Recognition For Human-Computer Interaction

In this method, when the user makes a gesture, the system instantly captures an image of the hand with its camera module. The image is then converted to grayscale, and the grayscale image is processed for noise removal and smoothing.
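
The following is a minimal sketch of this capture and pre-processing step using OpenCV. The camera index and the blur kernel size are assumptions for illustration, not values taken from the article.

import cv2

cap = cv2.VideoCapture(0)          # default webcam (assumed camera index)
ret, frame = cap.read()            # capture a single frame containing the hand
if ret:
    # grayscale conversion
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Gaussian blur removes high-frequency noise and smooths the image
    smooth = cv2.GaussianBlur(gray, (5, 5), 0)
cap.release()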

Otsu binarization automatically calculates a threshold value from the image histogram of a bimodal image, that is, an image whose histogram has two peaks. Thresholding is then applied to the processed grayscale image to obtain a binary image: every pixel is set to either black or white depending on whether its value is below or above the Otsu threshold, which gives greater accuracy. Contour extraction is performed to detect the hand, the convex hull of the contour is computed along with its convexity defects, and gestures are recognized based on these defects.
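
A sketch of this thresholding and convexity-defect step with OpenCV is shown below; `smooth` is the pre-processed grayscale image from the previous step, and the defect-depth cutoff is an assumed value.

import cv2

# Otsu's method picks the threshold automatically from the bimodal histogram
_, binary = cv2.threshold(smooth, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Extract contours and keep the largest one, assumed to be the hand
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
hand = max(contours, key=cv2.contourArea)

# Convex hull (as point indices) and convexity defects between hull and contour
hull = cv2.convexHull(hand, returnPoints=False)
defects = cv2.convexityDefects(hand, hull)

# Count deep defects (the valleys between extended fingers)
finger_gaps = 0
if defects is not None:
    for i in range(defects.shape[0]):
        start, end, far, depth = defects[i, 0]
        if depth > 10000:          # depth is in 1/256-pixel units; cutoff is an assumption
            finger_gaps += 1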

For gestures like the palm and the fist, where there are no convexity defects, a Haar cascade classifier is used. It is trained on a collection of positive images, a minimum of ten original images taken under different lighting conditions and from different angles, to recognize these gestures. An action is then mapped to each recognized gesture, and finally the application mapped to that gesture is launched.
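
A sketch of the gesture-to-action mapping is given below. The cascade files, gesture labels, and launched applications are hypothetical placeholders; a real system would use cascades trained on its own positive image sets and whatever actions the user assigns.

import cv2
import subprocess
import webbrowser

fist_cascade = cv2.CascadeClassifier("fist_cascade.xml")   # hypothetical trained cascade
palm_cascade = cv2.CascadeClassifier("palm_cascade.xml")   # hypothetical trained cascade

def classify(gray_frame, finger_gaps):
    """Map convexity-defect count and cascade hits to a gesture label."""
    if finger_gaps >= 4:
        return "open_hand"
    if finger_gaps == 0:
        # No defects: fall back to Haar cascades to tell palm from fist
        if len(fist_cascade.detectMultiScale(gray_frame)) > 0:
            return "fist"
        if len(palm_cascade.detectMultiScale(gray_frame)) > 0:
            return "palm"
    return "unknown"

# Gestures are customizable: any task can be assigned to each one
actions = {
    "open_hand": lambda: webbrowser.open("https://www.geeksforgeeks.org"),
    "fist": lambda: subprocess.Popen(["notepad.exe"]),      # assumed application
}

gesture = classify(smooth, finger_gaps)
actions.get(gesture, lambda: None)()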

Advantages:

  • It is an intuitive and natural way of interaction.
  • More user-friendly.
  • Recognizes both static and dynamic hand movements.
  • Fast and sufficiently reliable as a recognition system.
  • Easily implemented in real-time systems.
  • These gestures are customizable, and any task can be assigned to each gesture.
  • It has minimal hardware requirements.
  • Low cost.

Disadvantages:

  • Accuracy of recognition will drop if there is any involvement of a complex background.
  • Irrelevant objects with hands can mislead the recognition system.
  • The system might require the hand to be vertical and fingers pointing exactly to the camera.
  • The performance of this system drops as the distance between user and camera increases.
  • Ambient light affects color detection, which reduces system performance.
  • It still doesn’t produce an interface that can replace physical controllers.

Methodology For Gaze Gesture And Their Applications In Human-Computer Interaction With a Head-Mounted Display

This method uses eye tracking, a technique for measuring the gaze point of the human eyes and their degree of movement relative to the head pose. The system achieves an HMD-based gaze interaction style that detects and tracks the human gaze direction in real time at close range. 

The process starts by photographing the eye with a near-eye camera integrated into the HMD in order to calculate the gaze direction. The system customizes the range of the head-pose distribution so that it can directly provide head-pose information, and the data collected over this range are consistent with the images captured by the HMD. Using the pupil as the center, the image is magnified by some factor: the pupil coordinates become the center coordinates, the number of pixels is randomly reduced, and finally Gaussian filtering is applied to the image. Two deep convolutional neural network modules are then used to classify the image's gaze trajectories. The network is trained on almost ten thousand gaze trajectories collected from various individuals. 
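
Below is a minimal sketch of this pre-processing and CNN classification step. The crop size, zoom factor, number of gesture classes, and network shape are illustrative assumptions, not the values used in the published system.

import cv2
import torch
import torch.nn as nn

def preprocess(eye_img, pupil_xy, zoom=2.0, out_size=64):
    """Magnify the grayscale eye image around the pupil centre and apply Gaussian filtering."""
    cx, cy = pupil_xy
    h, w = eye_img.shape[:2]
    half = int(out_size / zoom) // 2
    crop = eye_img[max(cy - half, 0):min(cy + half, h),
                   max(cx - half, 0):min(cx + half, w)]
    crop = cv2.resize(crop, (out_size, out_size))    # magnification around the pupil
    crop = cv2.GaussianBlur(crop, (3, 3), 0)         # Gaussian filtering
    return torch.from_numpy(crop).float().unsqueeze(0) / 255.0   # shape (1, 64, 64)

class GazeNet(nn.Module):
    """A small CNN that classifies a gaze-trajectory image into one of N gesture classes."""
    def __init__(self, num_gestures=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_gestures)

    def forward(self, x):            # x: (batch, 1, 64, 64); add a batch dim before calling
        x = self.features(x)
        return self.classifier(x.flatten(1))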

Because the feature distributions of synthetic images differ from those of real images, learning from synthetic images alone may not achieve the expected performance. To bridge this gap between the synthetic and real image distributions, the network starts from a model pre-trained on a large dataset and then fine-tunes it on real data. Training on real data resolves the distribution mismatch and improves recognition while retaining the labeling information. The recognized gaze gestures are finally mapped to specific functions chosen by the user.  
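
The following sketch shows the pre-train-then-fine-tune idea in general terms, using a torchvision model pre-trained on a large image corpus as the starting point. The backbone, class count, and hyperparameters are assumptions for illustration, not the ones used by the described system, and the inputs are assumed to be 3-channel images.

import torch
import torch.nn as nn
from torchvision import models

num_gestures = 8                                            # assumed number of gaze gestures
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, num_gestures)    # new classification head

# Freeze the pre-trained feature extractor; only the head is trained at first,
# so the labels are kept while the data source shifts to real images.
for name, param in model.named_parameters():
    if not name.startswith("fc"):
        param.requires_grad = False

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def fine_tune_step(real_images, labels):
    """One training step on real gaze images to bridge the synthetic/real gap."""
    optimizer.zero_grad()
    loss = criterion(model(real_images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()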

Advantages:

  • This system is very robust and user-friendly.
  • Very helpful in situations where the hands are occupied and speech is not an option.
  • Eye gaze movement is significantly faster than any other gesture movement.
  • This technique collects large amounts of accurate and precise data for gesture recognition.
  • Enhanced accuracy of recognition with the help of the neural network.
  • Uses two neural networks in parallel to map different features to ensure consistency between the obtained results.
  • Can adapt to various lighting conditions, whether indoor or outdoor.
  • Can act as a means of interface for disabled people.

Disadvantages:

  • New users might tend to draw inaccurate patterns.
  • Requires some adaptation time for the user to get used to the interface.
  • Relative positions of eyes and camera vary from person to person.
  • Contact lenses and glasses can impact the camera's ability to track eye movements.
  • Eye-tracking and training the data set for the neural net can be expensive.

Use Of These Techniques In Real-Time Scenarios:

1. Hand Gesture: There is huge potential for new kinds of interface designs that refine how we interact with computers, and a great deal of research is going into developing them. The market for hand gesture-based interfaces is large, and such interfaces have many practical applications where the user can act without needing to reach for a physical controlling device. 

For example, during a product launch presentation, a speaker can use hand gestures to advance to the next slide instead of holding a physical clicker. This kind of interface can also be very effective for small smart mobile devices and wearables such as smartwatches, where the interface is limited and the screens are often so small that users are left with little beyond a touch screen. 

Hence, hand gesture recognition can be implemented in such devices with the help of infrared sensors. The gestures can then drive basic but frequent functions such as launching certain applications, increasing or decreasing the volume, skipping songs, or calling a preset contact, and such functions are easy to implement. This creates a convenient interface for devices whose limited physical area would make a purely visual interface cumbersome to interact with. 

2. Gaze Gesture: This technology is in huge demand in other interface paradigms such as augmented reality and virtual reality (AR and VR), where eye-movement tracking is used for various functions. Traditional interaction methods can be unsuitable in environments where the hands are occupied and speech is not an option. 

In such cases, gaze gestures can be used, since these applications are completely free of physical interaction devices except for the head mount, which serves only to track the user's eye movement. Because these devices need exactly the free and convenient form of interaction that a gaze gesture interface offers, the technology can be used extensively in these fields. Such systems are already in use; Microsoft HoloLens is one example. 

In smart devices such as smart lenses or smart spectacles that display information, gaze gestures can serve as the interface for navigation and other user-specified functions. Because the method tracks the eyes with high accuracy, it can also be used as a drawing tool in a virtual environment, or to detect the drowsiness of the user and warn them if desired. Another scenario where tracking the user's eyes matters is commerce, where it can detect which product has grabbed the attention of a user watching an advertisement. 
