Computers controlled with a smile or a blink to come

Scientists are developing new technologies that will allow computers to recognise non-verbal commands such as gestures, body language and facial expressions.
For most people, using a computer is limited to clicking, typing, searching, and, thanks to Siri and similar software, verbal commands. "Compare that with how humans interact with each other, face to face: smiling, frowning, pointing, tone of voice all lend richness to communication," researchers said.
The new project, titled "Communication Through Gestures, Expression and Shared Perception," aims to revolutionise everyday interactions between humans and computers. "Current human-computer interfaces are still severely limited," said Professor Bruce Draper, from Colorado State University (CSU), who is leading the project.
"First, they provide essentially one-way communication: users tell the computer what to do. This was fine when computers were crude tools, but more and more, computers are becoming our partners and assistants in complex tasks. Communication with computers needs to become a two-way dialogue," said Draper.
The team has proposed creating a library of what are called Elementary Composable Ideas (ECIs). Like little packets of information recognisable to computers, each ECI contains information about a gesture or facial expression, derived from human users, as well as a syntactical element that constrains how the information can be read.
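To make the idea concrete, the sketch below shows one way an ECI might be represented in Python. The field names, the role categories and the feature vector are illustrative assumptions; the article does not describe the project's actual schema.

```python
from dataclasses import dataclass
from enum import Enum, auto

class SyntacticRole(Enum):
    """Assumed categories for where an idea can appear in a dialogue."""
    COMMAND = auto()   # e.g. "stop"
    QUERY = auto()     # e.g. "huh?"
    DEICTIC = auto()   # e.g. pointing at an object on the table

@dataclass
class ElementaryComposableIdea:
    name: str                      # human-readable label, e.g. "stop"
    gesture_features: list[float]  # gesture/expression data derived from users
    role: SyntacticRole            # syntactic element constraining how it is read

# Example: a "stop" gesture recorded from a user session.
stop_eci = ElementaryComposableIdea(
    name="stop",
    gesture_features=[0.82, 0.13, 0.05],  # placeholder feature vector
    role=SyntacticRole.COMMAND,
)
```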
To achieve this, the researchers have set up a Microsoft Kinect interface. A human subject sits down at a table with blocks, pictures and other stimuli. The researchers try to communicate with the person and record their natural gestures for concepts like "stop" or "huh?". "We don't want to say what gestures you should use," Draper said.
"We want people to come in and tell us what gestures are natural. Then, we take those gestures and say, 'OK, if that's a natural gesture, how do we recognise it in real time, and what are its semantics? What roles does it play in the conversation? When do you use it? When do you not use it?'" Draper said.
Their goal: making computers smart enough to reliably recognise non-verbal cues from humans in the most natural, intuitive way possible. According to the project proposal, the work could someday allow people to communicate more easily with computers in noisy settings, or when a person is deaf or hard of hearing, or speaks another language.
The project, which falls broadly under the basic research arm of the US Defense Advanced Research Projects Agency (DARPA), is focused on enabling people to talk to computers through gestures and expressions in addition to words, not in place of them, researchers said.