A flexible, paper-like ceramic material has been created that promises to provide an inexpensive, fireproof, non-conductive base for a whole range of new and innovative electronic devices (Credit: Eurekite).
Materials to make hard-wearing, bendable non-conducting substrates for wearables and other flexible electronics are essential for the next generation of integrated devices. In this vein, researchers at the University of Twente have reformulated ceramic materials so that they have the flexibility of paper and the lightness of a polymer, but still retain exceptional high-temperature resistance. The new material has been dubbed flexiramics.
High-tech materials such as flexible polymers and boron nitride show promise in this regard, and may eventually make cheaper but more brittle insulators – such as those made from traditional ceramics – a thing of the past. However, the new flexiramics material could give these newer materials a run for their money: not only is it tissue-like and easy to fold without breaking, it is also reportedly inexpensive and easy to produce.
BOSTON — When evaluating wearables, IT can’t leave out augmented and virtual reality devices, which are poised to have a major effect on the enterprise.
MasterCard is bringing the future of commerce to life with virtual and augmented reality commerce experiences and payment-enabled wearables at the Arnold Palmer Invitational Presented by MasterCard (API) in Orlando, FL. Soon, golf fans may be able to shop for Graeme McDowell’s equipment and G-Mac apparel while teeing off with him on a virtual fairway. Or, while out on the course, golfers might simply tap their golf glove at the point of sale to buy refreshments from the beverage cart—no wallet required.
Wearables and other connected devices have been available to help treat chronic conditions like asthma and heart disease for a while now. But thus far, the nation’s 30 million diabetics haven’t seen much to help them improve their health or reduce the daily grind of finger pricks and needle pokes.
The $2.5 billion connected-care industry may be off to a late start in diabetes, but it’s making up for lost time. A new breed of connected glucometers, insulin pumps and smartphone apps is hitting the market. They promise to make it easier for diabetics to manage the slow-progressing disease and keep them motivated with feedback and support. In as little as two years, the industry plans to take charge of the entire uncomfortable, time-consuming routine of checking and regulating blood-sugar levels with something called an artificial pancreas. Such systems mimic the functions of a healthy pancreas by blending continuous glucose monitoring, remote-controlled insulin pumps and artificial intelligence to maintain healthy blood-sugar levels automatically.
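The control loop behind such an artificial pancreas can be pictured as a simple feedback cycle: read the glucose monitor, decide on a dose, deliver it, and repeat. Below is a deliberately simplified Python sketch of one iteration of that cycle; the constants, the proportional correction rule, and the pump.deliver_bolus interface are illustrative assumptions, not any vendor's actual dosing algorithm.

```python
# Hypothetical closed-loop sketch: names, constants, and the pump interface
# are assumptions for illustration, not a real medical dosing algorithm.

TARGET_MG_DL = 110        # desired blood-glucose level (mg/dL)
CORRECTION_FACTOR = 50    # assumed mg/dL drop per unit of insulin
MAX_BOLUS_UNITS = 2.0     # safety cap on any single automatic dose


def correction_dose(glucose_mg_dl: float) -> float:
    """Proportional correction: dose only when glucose is above target."""
    excess = glucose_mg_dl - TARGET_MG_DL
    if excess <= 0:
        return 0.0
    return min(excess / CORRECTION_FACTOR, MAX_BOLUS_UNITS)


def control_step(cgm_reading: float, pump) -> float:
    """One monitor -> decide -> dose iteration of the closed loop."""
    dose = correction_dose(cgm_reading)
    if dose > 0:
        pump.deliver_bolus(dose)  # hypothetical pump interface
    return dose
```

Real systems layer far more on top of this bare loop, such as predictive models, insulin-on-board tracking, and safety interlocks, which is where the artificial intelligence mentioned above comes in.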
For Jeroen Tas, CEO of Philips’ Connected Care and Health Informatics unit, diabetes management is also personal: his daughter Kim is diabetic.
Virtual and augmented reality are taking giant leaps every day, both in the mainstream and in research labs. In a recent TechEmergence interview, Biomedical Engineer and Founder of g.tec Medical Engineering Christopher Guger said the next big steps will be in brain-computer interfaces (BCIs) and embodiment.
If you’re unfamiliar with the term, embodiment is the moment when a person truly “feels” at one with a device controlled by their thoughts, while sensing that device as a part of, or an extension of, themselves. While researchers are taking big strides toward that concept, Guger believes those are only baby steps toward what is to come.
While augmented or virtual reality can take us away for a brief period, Guger said true embodiment will require far more BCI development. There has been a lot of work recently in robotic embodiment using BCI.
“We have the robotic system, which is learning certain tasks. You can train the robotic system to pick up objects, to play a musical instrument and, after the robotic system has learned, you’re just giving the high-level command for the robotic system to do it for you,” he said. “This is like a human being, where you train yourself for a certain task and you have to learn it. You need your cortex and a lot of neurons to do the task. Sometimes, it’s pre-programmed and (sometimes) you’re just making the high-level decision to do it.”
Another tool at work in the study of embodiment is what Guger called “virtual avatars.” These virtual avatars allow researchers to experiment with embodiment and learn how avatars need to behave, while also helping humans grow more comfortable with the idea of being embodied inside an avatar. Being at ease inside the avatar, he said, makes it easier for one to learn tasks and train, or re-train, for specific functions.
As an example, Guger cited a stroke patient working to regain movement in his hand. Placed inside a virtual avatar, the patient can “see” the avatar’s hand moving in the same manner that he wants his own hand to move. This connection activates mirror neurons in the patient’s brain, which helps the brain rewire itself and regain a sense of the hand.
“We also do functional electrical stimulation (where) the hand is electrically stimulated, so you also get the same type of movement. This, altogether, has a very positive effect on the remobilization of the patient,” Guger said. “Your movement and the virtual movement, that’s all feeding back to the artificial systems in the cortex again and is affecting brain plasticity. This helps people learn to recover faster.”
One hurdle that researchers are still working to overcome is the concept of “break in presence” (discussed in the article under the sub-heading ‘head-tracking module’). Basically, this is the moment where one’s immersion in a virtual reality world is interrupted by an outside influence, leading to the loss of embodiment. Avoiding that loss of embodiment, he said, is what researchers are striving to attain to make virtual reality a more effective technology.
Though Guger believes mainstream BCI use and true embodiment are still a ways off, other applications of BCI and embodiment are already happening in the medical field. In addition to helping stroke patients regain their mobility, there are BCI systems that allow doctors to assess brain activity in coma patients, which provides some level of communication for both the patient and the family. Further, ALS patients are able to take advantage of BCI technology to improve their quality of life through virtual movement and communication.
“For the average person on the street, it’s very important that the BCI system is cheap and working, and it has to be faster or better compared to other devices that you might have,” he said. “The embodiment work shows that you can really be embodied in another device; this is only working if you are controlling it mentally, like the body is your own, because you don’t have to steer the keyboard or the mouse. It’s just your body and it’s doing what you want it to do. And then you gain something.”
There are many opportunities in the VR/AR space for enterprise apps, platforms, and services. Over the years we have all seen opportunities missed because companies did not do a proper value-map assessment and apply the findings to their own product roadmaps. I have personally created my own value map of VR and AR opportunities across various industries and their business capabilities, and I hope that others have done the same around this technology.
But augmented reality might be the best stepping stone.
The Internet is full of incredible DIY projects that make you wish you had the years of experience required to build your own Batmobile, flaming Mad Max guitar, or hoverboard. Thankfully, with this underlit miniskirt, we’ve come across a DIY item that looks awesome and is still easy to make.
This wearable was inspired by the Hikaru skirt, a programmable LED miniskirt that took certain corners of the Japanese Internet by storm earlier this year.
A team of Stanford researchers has developed a novel means of teaching artificial intelligence systems how to predict a human’s response to their actions. They’ve given their knowledge base, dubbed Augur, access to the online writing community Wattpad and its archive of more than 600,000 stories. This information will enable support vector machines (basically, learning algorithms) to better predict what people do in the face of various stimuli.
“Over many millions of words, these mundane patterns [of people’s reactions] are far more common than their dramatic counterparts,” the team wrote in their study. “Characters in modern fiction turn on the lights after entering rooms; they react to compliments by blushing; they do not answer their phones when they are in meetings.”
In its initial field tests, using an Augur-powered wearable camera, the system correctly identified objects and people 91 percent of the time. It correctly predicted their next move 71 percent of the time.
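As a rough illustration of the kind of mapping Augur learns, the sketch below trains a linear support vector machine to associate a described scene with a likely next action. The handful of hand-written examples stand in for the patterns mined from fiction; they are not the Wattpad corpus, and this toy pipeline is only a stand-in for the published system.

```python
# Toy stand-in for Augur-style next-action prediction: invented examples,
# not the Wattpad data or the published model.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

scenes = [
    "enters a dark room and reaches for the wall",
    "phone rings during an important meeting",
    "receives a compliment from a close friend",
]
next_actions = ["turn on the lights", "ignore the phone", "blush"]

# Bag-of-words features feeding a linear support vector machine,
# mirroring the "learning algorithms" described above.
model = make_pipeline(CountVectorizer(), LinearSVC())
model.fit(scenes, next_actions)

print(model.predict(["walks into a dark hallway"]))  # likely "turn on the lights"
```

In the real system the features come from hundreds of thousands of stories rather than three hand-picked sentences, which is what makes the mundane patterns quoted above statistically useful.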
K-Glass, the augmented reality (AR) smart glasses first developed by the Korea Advanced Institute of Science and Technology (KAIST) in 2014 and updated with a second version in 2015, is back with an even stronger model. The latest version, which KAIST researchers are calling K-Glass 3, allows users to text a message or type in keywords for Internet surfing by offering a virtual keyboard for text and even one for a piano.
Currently, most wearable head-mounted displays (HMDs) suffer from a lack of rich user interfaces, short battery life, and heavy weight. Some HMDs, such as Google Glass, use a touch panel and voice commands as an interface, but they are considered merely an extension of smartphones and are not optimized for wearable smart glasses. Recently, gaze recognition was proposed for HMDs, including K-Glass 2, but gaze alone is insufficient to realize a natural user interface (UI) and experience (UX), such as gesture recognition, because of its limited interactivity and lengthy gaze-calibration time, which can take up to several minutes.
As a solution, Professor Hoi-Jun Yoo and his team from the Electrical Engineering Department recently developed K-Glass 3 with a low-power natural UI and UX processor to enable convenient typing and screen pointing on HMDs with just bare hands. This processor is composed of a pre-processing core to implement stereo vision, seven deep-learning cores to accelerate real-time scene recognition within 33 milliseconds, and one rendering engine for the display.
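In software terms, the dataflow that this processor implements in silicon looks roughly like the sketch below: a stereo pre-processing stage, recognition work fanned out across seven parallel workers, and a rendering step, all measured against a 33-millisecond frame budget. The function names and the thread-pool analogy are illustrative assumptions, not the actual hardware design.

```python
# Rough software analogue of the K-Glass 3 dataflow described above;
# the function bodies and the thread-pool split are illustrative stand-ins.
import time
from concurrent.futures import ThreadPoolExecutor

FRAME_BUDGET_S = 0.033  # 33 ms target for real-time scene recognition


def preprocess_stereo(left, right):
    """Stand-in for the stereo-vision pre-processing core."""
    return {"left": left, "right": right}


def recognize_region(features, region):
    """Stand-in for one of the seven deep-learning recognition cores."""
    return f"label-for-region-{region}"


def render(labels):
    """Stand-in for the rendering engine drawing the virtual keyboard overlay."""
    return {"overlay": labels}


def process_frame(left, right):
    start = time.monotonic()
    features = preprocess_stereo(left, right)
    with ThreadPoolExecutor(max_workers=7) as pool:  # seven parallel recognition cores
        labels = list(pool.map(lambda r: recognize_region(features, r), range(7)))
    frame = render(labels)
    frame["within_budget"] = (time.monotonic() - start) <= FRAME_BUDGET_S
    return frame
```

The point of the sketch is the pipeline shape and the latency budget; in the actual chip these stages are dedicated hardware blocks rather than threads.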