
Summary: A new study combines deep learning with neural activity data from mice to unlock the mystery of how they navigate their environment.

By analyzing the firing patterns of “head direction” neurons and “grid cells,” researchers can now accurately predict a mouse’s location and orientation, shedding light on the complex brain functions involved in navigation. This method, developed in collaboration with the US Army Research Laboratory, represents a significant leap forward in understanding spatial awareness and could revolutionize autonomous navigation in AI systems.
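
As a rough illustration of what such decoding involves (not the study's actual model), the sketch below fits an off-the-shelf regressor to map simulated grid-cell-like firing rates onto 2-D position. The cell model, parameters, and network size are all invented for the example.

```python
# Hypothetical sketch: decoding a mouse's 2-D position from simulated
# grid-cell-like firing rates. This is not the published model, only an
# illustration of the decoding idea.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Simulate cells whose firing varies periodically with position.
n_cells, n_samples = 200, 5000
positions = rng.uniform(0, 1, size=(n_samples, 2))       # (x, y) in a unit arena
freqs = rng.uniform(3, 8, size=(n_cells, 2))              # spatial frequency per cell
phases = rng.uniform(0, 2 * np.pi, size=(n_cells, 2))     # spatial phase per cell

rates = (np.cos(2 * np.pi * positions[:, :1] * freqs[:, 0] + phases[:, 0])
         + np.cos(2 * np.pi * positions[:, 1:] * freqs[:, 1] + phases[:, 1]))
rates += 0.1 * rng.normal(size=rates.shape)                # recording noise

# Train a small network to decode position from the population firing rates.
X_train, X_test, y_train, y_test = train_test_split(rates, positions, random_state=0)
decoder = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
decoder.fit(X_train, y_train)
print("decoding R^2 on held-out data:", round(decoder.score(X_test, y_test), 3))
```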

The findings highlight the potential for integrating biological insights into artificial intelligence to enhance machine navigation without relying on GPS technology.

We developed Significant Latent Factor Interaction Discovery and Exploration (SLIDE), an interpretable machine learning approach that can infer hidden states (latent factors) underlying biological outcomes. These states capture the complex interplay between factors derived from multiscale, multiomic datasets across biological contexts and scales of resolution.
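
SLIDE itself is described in the associated publication; the snippet below is only a generic illustration of what inferring latent factors means, using ordinary factor analysis on a synthetic omics-style matrix, with pairwise products of inferred factors as a crude stand-in for factor interactions.

```python
# Not the SLIDE algorithm -- a minimal illustration of latent factor inference:
# recover a few hidden factors from a high-dimensional matrix, then relate
# them (and their pairwise interactions) to an outcome.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n_samples, n_features, n_factors = 300, 200, 5

# Synthetic "omics" matrix: observed features are noisy mixtures of hidden factors.
true_factors = rng.normal(size=(n_samples, n_factors))
loadings = rng.normal(size=(n_factors, n_features))
X = true_factors @ loadings + 0.5 * rng.normal(size=(n_samples, n_features))

# An outcome driven by the hidden factors, including an interaction between two of them.
y = true_factors[:, 0] * true_factors[:, 1] + true_factors[:, 2]

# Infer latent factors from X alone, then ask how well they explain the outcome.
fa = FactorAnalysis(n_components=n_factors, random_state=0)
Z = fa.fit_transform(X)

# Pairwise products of inferred factors serve as simple "interaction" terms.
pairs = np.column_stack([Z[:, i] * Z[:, j]
                         for i in range(n_factors) for j in range(i + 1, n_factors)])
model = LinearRegression().fit(np.hstack([Z, pairs]), y)
print("R^2 of outcome explained by inferred factors:",
      round(model.score(np.hstack([Z, pairs]), y), 3))
```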

In this first article in a series on philosophy and science, we take a look at materialism and why it is fundamental to science.

A short disclaimer before we read further: I’m a materialist. Materialism is a branch of philosophy to which the sciences, particularly the physical and life sciences, owe a lot. Materialism posits that the material world — matter — exists, and everything in the Universe, including consciousness, is made from or is a product of matter. An objective reality exists and we can understand it. Without materialism, physics, chemistry, and biology as we know them wouldn’t exist.

Another branch of philosophy, idealism, is in direct contradiction to materialism. Idealism states that, instead of matter, the mind and consciousness are fundamental to reality; that they are immaterial and therefore independent of the material world.

A collaborative project has made a breakthrough in enhancing the speed and resolution of widefield quantum sensing, leading to new opportunities in scientific research and practical applications.

By collaborating with scientists from Mainland China and Germany, the team has successfully developed a technology using a neuromorphic vision sensor, which is designed to mimic the human vision system. This sensor is capable of encoding changes in fluorescence intensity into spikes during optically detected magnetic resonance (ODMR) measurements.

The key advantage of this approach is that it results in highly compressed data volumes and reduced latency, making the system more efficient than traditional methods. This breakthrough in quantum sensing holds potential for various applications in fields such as monitoring dynamic processes in biological systems.
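
The sketch below illustrates the general principle behind such event-based encoding (emit a spike only when intensity crosses a threshold), not the published sensor or its electronics; the threshold and the test trace are arbitrary.

```python
# Toy illustration (not the published system): delta-modulation encoding, the
# principle behind event/neuromorphic sensors -- emit a +1/-1 "spike" only when
# intensity drifts past a threshold, so slowly varying signals compress well.
import numpy as np

def encode_spikes(signal, threshold=0.05):
    """Return (spike_times, spike_polarities) for a 1-D intensity trace."""
    times, polarities = [], []
    reference = signal[0]
    for t, value in enumerate(signal[1:], start=1):
        if value - reference >= threshold:
            times.append(t); polarities.append(+1); reference = value
        elif reference - value >= threshold:
            times.append(t); polarities.append(-1); reference = value
    return np.array(times), np.array(polarities)

# A slowly varying fluorescence-like trace with one brief transient.
t = np.linspace(0, 1, 2000)
trace = 0.5 + 0.1 * np.sin(2 * np.pi * 3 * t)
trace[800:820] += 0.3   # transient event

times, pol = encode_spikes(trace)
print(f"{len(trace)} samples reduced to {len(times)} spikes")
```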

In an age of increasingly advanced robotics, one team has well and truly bucked the trend, instead finding inspiration within the pinhead-sized brain of a tiny flying insect in order to build a robot that can deftly avoid collisions with very little effort and energy expenditure.

An insect’s tiny brain is an unlikely source of biomimicry, but researchers from the University of Groningen in the Netherlands and Bielefeld University in Germany believed it was an ideal model for how robots could move. Fruit flies (Drosophila melanogaster) possess remarkably simple but effective navigational skills, using very little brainpower to swiftly travel along invisible straight lines, then adjusting accordingly – flying in a line angled to the left or the right – to avoid obstacles.

With such a tiny brain, the fruit fly has limited computational resources available to it while in flight – a biological model, the scientists believed, that could be adapted for use in the ‘brain’ of a robot for efficient, low-energy and obstacle-avoiding locomotion.
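
To make the idea concrete, here is a deliberately minimal "fly straight, veer when something looms" steering rule, loosely inspired by the behaviour described above. It is not the researchers' controller; the sensor readings and thresholds are invented.

```python
# Illustrative only -- a bare-bones steering rule: hold a straight line until
# an obstacle looms on one side, then veer away by a fixed angle.
import math
import random

def steer(left_proximity, right_proximity, threshold=0.6, turn_angle=math.radians(30)):
    """Return a heading adjustment in radians; 0 means keep flying straight."""
    if left_proximity < threshold and right_proximity < threshold:
        return 0.0                 # path clear: hold the straight line
    if left_proximity > right_proximity:
        return -turn_angle         # obstacle looms on the left: veer right
    return turn_angle              # obstacle looms on the right: veer left

# Simulate a short flight with random "looming" readings on each side.
heading = 0.0
for step in range(10):
    left, right = random.random(), random.random()
    heading += steer(left, right)
    print(f"step {step}: left={left:.2f} right={right:.2f} "
          f"heading={math.degrees(heading):.0f} deg")
```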

The Schwartz Reisman Institute for Technology and Society and the Department of Computer Science at the University of Toronto, in collaboration with the Vector Institute for Artificial Intelligence and the Cosmic Future Initiative at the Faculty of Arts & Science, present Geoffrey Hinton on October 27, 2023, at the University of Toronto.

0:00:00 — 0:07:20 Opening remarks and introduction.
0:07:21 — 0:08:43 Overview.
0:08:44 — 0:20:08 Two different ways to do computation.
0:20:09 — 0:30:11 Do large language models really understand what they are saying?
0:30:12 — 0:49:50 The first neural net language model and how it works.
0:49:51 — 0:57:24 Will we be able to control super-intelligence once it surpasses our intelligence?
0:57:25 — 1:03:18 Does digital intelligence have subjective experience?
1:03:19 — 1:55:36 Q&A.
1:55:37 — 1:58:37 Closing remarks.

Talk title: “Will digital intelligence replace biological intelligence?”

Abstract: Digital computers were designed to allow a person to tell them exactly what to do. They require high energy and precise fabrication, but in return they allow exactly the same model to be run on physically different pieces of hardware, which makes the model immortal. For computers that learn what to do, we could abandon the fundamental principle that the software should be separable from the hardware and mimic biology by using very low-power analog computation that makes use of the idiosyncratic properties of a particular piece of hardware. This requires a learning algorithm that can make use of the analog properties without having a good model of those properties. Using the idiosyncratic analog properties of the hardware makes the computation mortal. When the hardware dies, so does the learned knowledge. The knowledge can be transferred to a younger analog computer by getting the younger computer to mimic the outputs of the older one, but education is a slow and painful process. By contrast, digital computation makes it possible to run many copies of exactly the same model on different pieces of hardware. Thousands of identical digital agents can look at thousands of different datasets and share what they have learned very efficiently by averaging their weight changes. That is why chatbots like GPT-4 and Gemini can learn thousands of times more than any one person. Also, digital computation can use the backpropagation learning procedure, which scales much better than any procedure yet found for analog hardware. This leads me to believe that large-scale digital computation is probably far better at acquiring knowledge than biological computation and may soon be much more intelligent than us. The fact that digital intelligences are immortal and did not evolve should make them less susceptible to religion and wars, but if a digital super-intelligence ever wanted to take control it is unlikely that we could stop it, so the most urgent research question in AI is how to ensure that they never want to take control.
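
To make the weight-sharing point in the abstract concrete, here is a tiny numpy sketch (not any specific production system) in which identical copies of one linear model train on different mini-datasets and pool what they have learned by averaging their weight changes.

```python
# A tiny sketch of the weight-sharing idea: identical copies of one model see
# different data, compute their own weight updates, and average those updates
# so every copy benefits from all the data. Plain data-parallel averaging,
# not any specific system mentioned in the talk.
import numpy as np

rng = np.random.default_rng(42)
true_w = np.array([2.0, -3.0, 0.5])
weights = np.zeros(3)                      # the shared model every copy starts from
n_agents, lr = 4, 0.1

for step in range(200):
    updates = []
    for _ in range(n_agents):              # each "agent" sees its own mini-dataset
        X = rng.normal(size=(32, 3))
        y = X @ true_w + 0.1 * rng.normal(size=32)
        grad = X.T @ (X @ weights - y) / len(y)
        updates.append(-lr * grad)         # that agent's proposed weight change
    weights += np.mean(updates, axis=0)    # share knowledge by averaging the changes

print("learned weights:", np.round(weights, 2), "target:", true_w)
```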

Plant leaves come in many different shapes, sizes and complexities. Some leaves are large and smooth, while others are smaller and serrated. Some leaves grow in single pieces while others form multiple leaflets. These variations in leaf structure play a crucial role in how plants adapt—and survive—in different environments.

“Plant morphology is diverse in nature,” said Zhongchi Liu, a professor emerita in the University of Maryland’s Department of Cell Biology and Molecular Genetics. “Morphological differences contribute to plant survival, including how well plants can regulate their temperatures and how efficiently they can transport water from their roots to the rest of their bodies.”

Understanding the mechanisms responsible for diverse leaf forms will lead to a better understanding of how plants can survive challenging conditions.