
Japanese researchers control Gundam robot using their minds.


Japanese scientists have created a device that controls a mini toy Gundam robot using the human mind, turning one of the anime’s most exciting technological concepts into reality.

The researchers customized a Zaku Gundam robot toy available through Bandai’s Zeonic Technics line, which buyers normally have to program manually using a smartphone app.

Another argument for the government to bring AI into its quantum computing program is the fact that the United States is a world leader in the development of computer intelligence. Congress is close to passing the AI in Government Act, which would encourage all federal agencies to identify areas where artificial intelligence could be deployed. And government partners like Google are making some amazing strides in AI, even creating a computer intelligence that can easily pass a Turing test over the phone by seeming like a normal human, no matter whom it’s talking with. It would probably be relatively easy for Google to merge some of its AI development with its quantum efforts.

The other aspect that makes merging quantum computing with AI so interesting is that the AI could probably help to reduce some of the so-called noise of the quantum results. I’ve always said that the way forward for quantum computing right now is to pair a quantum machine with a traditional supercomputer. The quantum computer would return results as it always does, with the correct outcome muddled in among a lot of wrong answers, and humans would then program a traditional supercomputer to help eliminate the erroneous results. The problem with that approach is that it’s fairly labor-intensive, and you still have the bottleneck of having to run results through a normal computing infrastructure. It would be a lot faster than giving the entire problem to the supercomputer, because you are only fact-checking a limited number of results pared down by the quantum machine, but the supercomputer would still have to work through each of them one at a time.

But imagine if we could simply train an AI to look at the data coming from the quantum machine and figure out, without human intervention, what makes sense and what is probably wrong. If that AI were itself driven by a quantum computer, the results could be returned without any hardware-based delays. And if we also employed machine learning, the AI would get better over time: the more problems fed to it, the more accurate it would become.
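
Here is a minimal sketch of that filtering idea in ordinary Python, using a toy stand-in for the quantum machine rather than real hardware; the sampler, the classifier choice, and the confidence cutoff are all illustrative assumptions, not anything from the article.

```python
# Toy sketch: train a classical classifier on labelled past runs so it can
# flag which samples from a noisy sampler look like genuine answers, then
# filter fresh output with no human in the loop.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
N_BITS = 16
SOLUTION = rng.integers(0, 2, N_BITS)  # hidden "correct" answer bitstring

def toy_quantum_samples(n, signal_fraction=0.3):
    """Return n bitstrings: some near SOLUTION, the rest uniform noise."""
    is_signal = rng.random(n) < signal_fraction
    noise = rng.integers(0, 2, size=(n, N_BITS))
    flipped = np.bitwise_xor(SOLUTION, (rng.random((n, N_BITS)) < 0.1).astype(int))
    return np.where(is_signal[:, None], flipped, noise), is_signal.astype(int)

# Labelled "past runs" to train the noise filter on.
X_train, y_train = toy_quantum_samples(5000)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Fresh, unlabelled output from the machine: keep only high-confidence samples.
X_new, y_true = toy_quantum_samples(1000)
keep = clf.predict_proba(X_new)[:, 1] > 0.9
print(f"kept {keep.sum()} of 1000 samples; "
      f"{(y_true[keep] == 1).mean():.0%} of kept samples are genuine")
```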

For those who are excited about 6G. 😃


Electromagnetic waves are characterized by a wavelength and a frequency; the wavelength is the distance a cycle of the wave covers (peak to peak or trough to trough, for example), and the frequency is the number of waves that pass a given point in one second. Cellphones use miniature radios to pick up electromagnetic signals and convert those signals into the sights and sounds on your phone.

4G wireless networks run on the low- and mid-band spectrum, defined as frequencies a little below (low-band) and a little above (mid-band) one gigahertz (or one billion cycles per second). 5G kicked that up several notches by adding much higher-frequency millimeter waves of up to 300 gigahertz, or 300 billion cycles per second. Data transmitted at those higher frequencies tends to be information-dense—like video—because they’re much faster.
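
As a quick aside, wavelength and frequency are tied together by the speed of light (wavelength = c / frequency), which is where the name “millimeter wave” comes from. A rough calculation with the figures above:

```python
# Back-of-the-envelope: wavelength = speed of light / frequency.
SPEED_OF_LIGHT = 3.0e8  # metres per second (approximate)

for label, freq_hz in [("mid-band (~1 GHz)", 1e9),
                       ("5G millimeter wave (300 GHz)", 300e9),
                       ("6G chip (1 THz)", 1e12)]:
    wavelength_mm = SPEED_OF_LIGHT / freq_hz * 1000
    print(f"{label}: wavelength ≈ {wavelength_mm:.2f} mm")
# ~300 mm at 1 GHz, 1 mm at 300 GHz, and 0.3 mm at 1 THz.
```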

The 6G chip kicks 5G up several more notches. It can transmit waves at more than three times the frequency of 5G: one terahertz, or a trillion cycles per second. The team says this yields a data rate of 11 gigabits per second. While that’s faster than the fastest 5G will get, it’s only the beginning for 6G. One wireless communications expert even estimates 6G networks could handle rates up to 8,000 gigabits per second; they’ll also have much lower latency and higher bandwidth than 5G.
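
To make those rates concrete, here is the rough transfer-time arithmetic for a large download at the demonstrated and projected speeds (idealized, ignoring protocol overhead):

```python
# Rough transfer-time arithmetic over an ideal link (no protocol overhead).
FILE_GB = 50                # e.g. a large game download, in gigabytes
FILE_GBITS = FILE_GB * 8    # convert to gigabits

for label, rate_gbps in [("6G chip demo (11 Gbps)", 11),
                         ("projected 6G (8,000 Gbps)", 8000)]:
    seconds = FILE_GBITS / rate_gbps
    print(f"{label}: {seconds:.2f} s for a {FILE_GB} GB file")
# ~36 s at 11 Gbps versus ~0.05 s at 8,000 Gbps.
```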

An international team of researchers has developed a multifunctional skin-mounted microfluidic device that is able to measure stress in people in multiple ways. In their paper published in Proceedings of the National Academy of Sciences, the group describes their device and how it could be useful.

Prior research has shown that stress can damage a person’s health. It can lead to diabetes, depression, obesity and a host of other problems. Some have suggested that one way to combat stress is to create a means of alerting a person to their heightened stress so that they can take action to reduce it. To that end, prior teams have developed skin-adhesive devices that collect sweat samples. The tiny samples contain small amounts of cortisol, a hormone that can be used as a marker of stress levels. In this new effort, the researchers have improved on these devices by developing one that measures more than just cortisol levels and is much more comfortable.

The researchers began with the notion that in order to convince people to wear such a device full time, it had to be both useful and comfortable. They solved the latter issue by making their device out of soft materials that adhere gently to the skin, and by using a skeletal design for the microfluidic sweat-collection apparatus—a flexible mesh. They also added more functionality: in addition to cortisol, the device measures glucose and vitamin C levels, and electrodes underneath measure sweat rate and the electrical conductivity of the skin, both of which change in response to stress. Finally, a wireless transmitter sends all of the data to a nearby smartphone running the device’s associated app.
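
As a rough illustration only (the field names, units, and alert threshold below are hypothetical and not taken from the paper), the kind of bundled reading such a device might stream to its companion app could look like this:

```python
# Hypothetical sketch of a bundled sensor reading sent to a companion app.
# Field names, units, and the alert threshold are illustrative assumptions,
# not values from the paper.
from dataclasses import dataclass

@dataclass
class SweatReading:
    timestamp_s: float
    cortisol_ng_per_ml: float    # stress-hormone marker
    glucose_mmol_per_l: float
    vitamin_c_umol_per_l: float
    sweat_rate_ul_per_min: float
    skin_conductance_us: float   # microsiemens

def elevated_stress(r: SweatReading, cortisol_alert_ng_per_ml: float = 15.0) -> bool:
    """Very crude flag: alert the wearer when the cortisol marker spikes."""
    return r.cortisol_ng_per_ml > cortisol_alert_ng_per_ml

sample = SweatReading(0.0, 18.2, 5.1, 45.0, 1.2, 6.5)
print("notify wearer:", elevated_stress(sample))
```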

Australian radio host Annabelle Brett, who works at Mix 106.3 Canberra, cleverly used her Tesla mobile app to mess with the two would-be thieves who attempted to steal her Model 3.

The theft attempt started when the radio host received notifications on her phone early one morning stating that her Model 3’s alarm was triggered. After receiving the notifications, Brett went to her locked garage where she had parked her Model 3 and discovered it was missing.

Unfortunately for the would-be thieves, Brett’s Model 3 is equipped with safety features like Sentry Mode, which continuously monitors a Tesla’s surroundings when it is left unattended. Through these features, footage from the car’s suite of cameras could be retrieved.

Why is one particular intellectual capacity valued over so many other worthy qualities, like compassion, honesty, courage, and common sense?


At some point during the past decade, Harvard professor Michael Sandel started to notice the increasingly frequent invocation of a particular word: “smart.” The term was being applied to all manner of products and devices: smart phones, smart cars, smart thermostats, even smart toasters. He also heard the word creeping into the language of politics, employed to justify and promote governmental initiatives. “The way the word was being used bothered me,” Sandel says. “It seemed to pair a narrow kind of technocratic expertise with an attitude of smug superiority.”

Political philosopher that he is, Sandel decided to conduct an analysis of presidential rhetoric. Before the 1980s, he found, American presidents rarely used the word “smart” in their public speeches. Ronald Reagan and George H.W. Bush employed the term relatively sparingly. But the use of the word in presidential remarks “exploded” during the administrations of Bill Clinton and George W. Bush, Sandel reported, with each man uttering the word “smart” at least 450 times. Barack Obama spoke it more than 900 times, and Hillary Clinton often invoked the term both as Secretary of State and as a candidate running for the highest office. This “rhetorical tic,” Sandel came to recognize, was representative of a much more sweeping cultural change, one he addresses with concern in his new book, “The Tyranny of Merit.” Over the past 40 years, he observes, America’s ruling class has exalted one quality, one virtue, one human attribute above all others: smartness.

Ultra-high-res displays for gadgets and TV sets may be coming. 😃


By expanding on existing designs for electrodes of ultra-thin solar panels, Stanford researchers and collaborators in Korea have developed a new architecture for OLED—organic light-emitting diode—displays that could enable televisions, smartphones and virtual or augmented reality devices with resolutions of up to 10,000 pixels per inch (PPI). (For comparison, the resolutions of new smartphones are around 400 to 500 PPI.)
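
To get a sense of scale, pixel pitch is simply 25.4 mm (one inch) divided by the pixel density. A quick calculation with the figures above:

```python
# Pixel pitch (centre-to-centre spacing) from pixels-per-inch.
MM_PER_INCH = 25.4

for label, ppi in [("current smartphone (~500 PPI)", 500),
                   ("proposed OLED architecture (10,000 PPI)", 10_000)]:
    pitch_um = MM_PER_INCH / ppi * 1000
    print(f"{label}: pixel pitch ≈ {pitch_um:.1f} µm")
# ~50.8 µm per pixel at 500 PPI versus ~2.5 µm at 10,000 PPI.
```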

Such high-pixel-density displays will be able to provide stunning images with true-to-life detail—something that will be even more important for headset displays designed to sit just centimeters from our faces.

The advance is based on research by Stanford University materials scientist Mark Brongersma in collaboration with the Samsung Advanced Institute of Technology (SAIT). Brongersma was initially put on this research path because he wanted to create an ultra-thin solar panel design.

Researchers led by Technische Universität Kaiserslautern (TUK) and the University of Vienna have successfully constructed a basic building block of computer circuits using magnons, in place of electrons, to convey information. The ‘magnonic half-adder,’ described in Nature Electronics, requires just three nanowires and far less energy than the latest computer chips.
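
For readers unfamiliar with the term, a half-adder is the simplest arithmetic circuit in digital logic: it adds two one-bit inputs and outputs a sum bit and a carry bit. The snippet below shows only the Boolean function such a circuit computes, not the magnon-based hardware described in the paper:

```python
# The logic a half-adder implements: sum = A XOR B, carry = A AND B.
def half_adder(a: int, b: int) -> tuple[int, int]:
    return a ^ b, a & b   # (sum bit, carry bit)

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> sum={s}, carry={c}")
# 1 + 1 -> sum=0, carry=1: the carry bit is what lets half-adders be chained
# into full adders for multi-bit arithmetic.
```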

A team of physicists are marking a milestone in the quest for smaller and more energy-efficient computing: they developed an integrated circuit using magnetic material and magnons to transmit binary data, the 1s and 0s that form the foundation of today’s computers and smartphones.

The new circuit is extremely tiny, with a streamlined, 2-D design that requires about one-tenth the energy of the most advanced computer chips available today, which use CMOS technology. While the current magnon configuration is not as fast as CMOS, the successful demonstration can now be explored further for other applications, such as quantum or neuromorphic computing.