In the realm of computing, information is usually perceived as being represented by a binary system of ones and zeros. However, in our everyday lives, we use a decimal system of ten digits to represent numbers. For instance, the number 9 is written in binary as 1001, requiring four digits instead of the single digit needed in the decimal system.
Today’s quantum computers grew out of this binary paradigm, but the physical systems that encode their quantum bits (qubits) are equally capable of encoding quantum digits (qudits). This was recently demonstrated by a team headed by Martin Ringbauer at the University of Innsbruck’s Department of Experimental Physics. According to experimental physicist Pavel Hrmo at ETH Zurich: “The challenge for qudit-based quantum computers has been to efficiently create entanglement between the high-dimensional information carriers.”
The study was published on April 19, 2023, in the journal Nature Communications.
Using the James Webb Space Telescope (JWST) and the Hubble Space Telescope (HST), astronomers from the University of Padua, Italy, and elsewhere have observed a metal-poor globular cluster known as Messier 92. The observations deliver crucial information regarding multiple stellar populations in this cluster. Results were published April 12 on the arXiv pre-print server.
Located some 26,700 light years away in the constellation of Hercules, Messier 92 (or M92 for short) is a globular cluster with a metallicity of just -2.31 and a mass of about 200,000 solar masses. The cluster, estimated to be 11.5 billion years old, is known to host at least two generations of stars, named 1G and 2G. Previous studies have found that Messier 92 has an extended 1G sequence, which hosts about 30.4% of cluster stars, and two distinct groups of 2G stars (2GA and 2GB).
A team of physicists has illuminated certain properties of quantum systems by observing how their fluctuations spread over time. The research offers an intricate understanding of a complex phenomenon that is foundational to quantum computing—a method that can perform certain calculations significantly more efficiently than conventional computing.
“In an era of quantum computing it’s vital to generate a precise characterization of the systems we are building,” explains Dries Sels, an assistant professor in New York University’s Department of Physics and an author of the paper, which is published in the journal Nature Physics. “This work reconstructs the full state of a quantum liquid, consistent with the predictions of a quantum field theory—similar to those that describe the fundamental particles in our universe.”
Sels adds that the breakthrough offers promise for technological advancement.
In 2020, scientists picked up distinct brain signals that had never been observed before. Such findings hint that the brain is a more powerful computational device than previously thought.
Distinct Brain Signals
According to Science Alert, researchers from German and Greek institutes reported a brain mechanism in the outer cortical cells, publishing their discoveries in the journal Science.
Moiré patterns occur everywhere. They are created by layering two similar but not identical geometric designs. A common example is the pattern that sometimes emerges when viewing a chain-link fence through a second chain-link fence.
For more than 10 years, scientists have been experimenting with the moiré pattern that emerges when a sheet of graphene is placed between two sheets of boron nitride. The resulting moiré pattern has shown tantalizing effects that could vastly improve semiconductor chips that are used to power everything from computers to cars.
A new study led by University at Buffalo researchers, and published in Nature Communications, demonstrated that graphene can live up to its promise in this context.
If I have a visual experience that I describe as a red tomato a meter away, then I am inclined to believe that there is, in fact, a red tomato a meter away, even if I close my eyes. I believe that my perceptions are, in the normal case, veridical—that they accurately depict aspects of the real world. But is my belief supported by our best science? In particular: Does evolution by natural selection favor veridical perceptions? Many scientists and philosophers claim that it does. But this claim, though plausible, has not been properly tested. In this talk, I present a new theorem: Veridical perceptions are never more fit than non-veridical perceptions which are simply tuned to the relevant fitness functions. This entails that perception is not a window on reality; it is more like a desktop interface on your laptop. I discuss this interface theory of perception and its implications for one of the most puzzling unsolved problems in science: the relationship between brain activity and conscious experiences.
Prof. Donald Hoffman received his PhD from MIT and joined the faculty of the University of California, Irvine in 1983, where he is a Professor Emeritus of Cognitive Sciences. He is the author of over 100 scientific papers and three books, including Visual Intelligence and The Case Against Reality. He received a Distinguished Scientific Award from the American Psychological Association for early career research, the Rustum Roy Award of the Chopra Foundation, and the Troland Research Award of the US National Academy of Sciences. His writing has appeared in Edge, New Scientist, LA Review of Books, and Scientific American, and his work has been featured in Wired, Quanta, The Atlantic, and Through the Wormhole with Morgan Freeman. You can watch his TED Talk titled “Do we see reality as it is?” and follow him on Twitter @donalddhoffman.
The hidden Markov model (HMM) [1, 2] is a powerful model for describing sequential data and has been widely used in speech signal processing [3-5], computer vision [6-8], longitudinal data analysis [9], social networks [10-12] and so on. An HMM typically assumes the system has K internal states, and the transition of states forms a Markov chain. The system state cannot be observed directly, so we need to infer the hidden states and system parameters from the observations. Due to the existence of latent variables, the Expectation-Maximisation (EM) algorithm [13, 14] is often used to learn an HMM. The main difficulty is to calculate the site marginal distributions and pairwise marginal distributions under the posterior distribution of the latent variables. The forward-backward algorithm was specifically designed to tackle this problem. The derivation of the forward-backward algorithm relies heavily on HMM assumptions and probabilistic relationships between quantities, thus requiring the parameters in the posterior distribution to have explicit probabilistic meanings.
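To make these quantities concrete, the following is a minimal Python/NumPy sketch (not taken from the paper; the function name and argument layout are our own) of the scaled forward-backward recursions that produce the site and pairwise marginals.

```python
import numpy as np

def forward_backward(pi, A, B):
    """Scaled forward-backward recursions for a chain with K states.

    pi : (K,)   initial state weights
    A  : (K, K) transition weights, A[i, j] for moving from state i to state j
    B  : (T, K) per-step emission weights, B[t, k] for state k at time t

    Returns gamma[t, k] = p(z_t = k | x_1..T) and
    xi[t, i, j] = p(z_t = i, z_{t+1} = j | x_1..T).
    """
    T, K = B.shape
    alpha = np.zeros((T, K))   # scaled forward messages
    beta = np.zeros((T, K))    # scaled backward messages
    c = np.zeros(T)            # per-step normalisers

    # Forward pass
    alpha[0] = pi * B[0]
    c[0] = alpha[0].sum()
    alpha[0] /= c[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[t]
        c[t] = alpha[t].sum()
        alpha[t] /= c[t]

    # Backward pass
    beta[T - 1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = (A @ (B[t + 1] * beta[t + 1])) / c[t + 1]

    # Site marginals and pairwise marginals
    gamma = alpha * beta
    xi = np.zeros((T - 1, K, K))
    for t in range(T - 1):
        xi[t] = (alpha[t][:, None] * A * (B[t + 1] * beta[t + 1])[None, :]) / c[t + 1]
    return gamma, xi
```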
A Bayesian HMM [15-22] further imposes priors on the parameters of the HMM, and the resulting model is more robust. It has been demonstrated that Bayesian HMMs often outperform HMMs in applications. However, the learning process of a Bayesian HMM is more challenging since the posterior distribution of the latent variables is intractable. Mean-field variational inference is often utilised in the E-step of the EM algorithm, which tries to find an optimal approximation of the posterior distribution within a factorised family. The variational inference iteration also involves computing site marginal distributions and pairwise marginal distributions given the joint distribution of the system state indicator variables. Existing works [15-23] directly apply the forward-backward algorithm to obtain these values without justification. This is not theoretically sound, and the result is not guaranteed to be correct, since the requirements of the forward-backward algorithm are not met in this case.
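In this setting the inputs to the recursions are exponentiated expected log-parameters rather than probabilities, so they do not sum to one. The sketch below, which reuses the forward_backward function above and hypothetical Dirichlet posterior counts, illustrates the quantities that the variational E-step actually passes to the algorithm.

```python
from scipy.special import digamma

# Hypothetical Dirichlet posterior counts for a Bayesian HMM with discrete emissions.
rng = np.random.default_rng(1)
K, V, T = 3, 5, 8                       # states, emission symbols, sequence length
x = rng.integers(0, V, size=T)          # an observed symbol sequence
w_pi = 1.0 + rng.random(K)              # counts for the initial-state distribution
w_A = 1.0 + rng.random((K, K))          # counts for each transition row
w_E = 1.0 + rng.random((K, V))          # counts for each emission row

# Exponentiated expected log-parameters: none of these rows sum to one, so they
# are not probability distributions, yet they are exactly what the E-step feeds
# into the forward-backward recursions.
pi_tilde = np.exp(digamma(w_pi) - digamma(w_pi.sum()))
A_tilde = np.exp(digamma(w_A) - digamma(w_A.sum(axis=1, keepdims=True)))
E_tilde = np.exp(digamma(w_E) - digamma(w_E.sum(axis=1, keepdims=True)))
B_tilde = E_tilde[:, x].T               # (T, K) pseudo-likelihoods per time step

gamma, xi = forward_backward(pi_tilde, A_tilde, B_tilde)
```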
In this paper, we prove that the forward-backward algorithm can be applied in more general cases where the parameters have no probabilistic meanings. The first proof converts the general case to an HMM and uses the correctness of the forward-backward algorithm on HMMs to prove the claim. The second proof is model-free and derives the forward-backward algorithm in an entirely different way. The new derivation does not rely on HMM assumptions and merely utilises matrix techniques to rewrite the desired quantities. Therefore, this derivation naturally proves that it is unnecessary to impose probabilistic requirements on the parameters of the forward-backward algorithm. Specifically, we justify that heuristically applying the forward-backward algorithm in the variational learning of a Bayesian HMM is theoretically sound and guaranteed to return the correct result.
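As an informal sanity check of this claim, one can feed arbitrary non-negative potentials with no probabilistic meaning into the same recursions and verify that properly normalised marginals come back:

```python
# Arbitrary non-negative potentials: no row is normalised and none has a
# probabilistic interpretation, yet the recursions return valid marginals.
rng = np.random.default_rng(0)
K, T = 4, 10
gamma, xi = forward_backward(rng.random(K), rng.random((K, K)), rng.random((T, K)))
print(np.allclose(gamma.sum(axis=1), 1.0))     # True: site marginals sum to one
print(np.allclose(xi.sum(axis=(1, 2)), 1.0))   # True: pairwise marginals sum to one
```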
How far would you go to keep your mind from failing? Would you go so far as to let a doctor drill a hole in your skull and stick a microchip in your brain?
It’s not an idle question. In recent years neuroscientists have made major advances in cracking the code of memory, figuring out exactly how the human brain stores information and learning to reverse-engineer the process. Now they’ve reached the stage where they’re starting to put all of that theory into practice.
Last month two research teams reported success at using electrical signals, carried into the brain via implanted wires, to boost memory in small groups of test patients. “It’s a major milestone in demonstrating the ability to restore memory function in humans,” says Dr. Robert Hampson, a neuroscientist at Wake Forest School of Medicine and the leader of one of the teams.
As fun as brain-computer interfaces (BCI) are, for the best results they tend to come with the major asterisk of requiring a section of the skull to be cut and lifted in order to implant a Utah array or similar electrode system. A non-invasive alternative consists of electrodes placed on the skin, albeit at reduced resolution. These electrodes are the subject of a recent experiment by [Shaikh Nayeem Faisal] and colleagues in ACS Applied NanoMaterials, employing graphene-coated electrodes in an attempt to optimize their performance.
Although external electrodes can be acceptable for basic tasks, such as registering a response to a specific (visual) impulse or for EEG recordings, they can be impractical in general use. Much of this is due to the disadvantages of the ‘wet’ and ‘dry’ electrode varieties, with the former, as the name suggests, involving an electrically conductive gel.
This gel ensures solid contact and a resistance of no more than 5–30 kΩ at 50 Hz, whereas dry sensors perform rather poorly at 200 kΩ at 50 Hz with worse signal-to-noise characteristics, even before adding in issues such as using the sensor on a hairy scalp, as tends to be the case for most human subjects.