
“Operating and navigating in home environments is very challenging for robots. Every home is unique, with a different combination of objects in distinct configurations that change over time. To address the diversity a robot faces in a home environment, we teach the robot to perform arbitrary tasks with a variety of objects, rather than program the robot to perform specific predefined tasks with specific objects. In this way, the robot learns to link what it sees with the actions it is taught. When the robot sees a specific object or scenario again, even if the scene has changed slightly, it knows what actions it can take with respect to what it sees.

We teach the robot using an immersive telepresence system, in which there is a model of the robot, mirroring what the robot is doing. The teacher sees what the robot is seeing live, in 3D, from the robot’s sensors. The teacher can select different behaviors to instruct and then annotate the 3D scene, such as associating parts of the scene to a behavior, specifying how to grasp a handle, or drawing the line that defines the axis of rotation of a cabinet door. When teaching a task, a person can try different approaches, making use of their creativity to use the robot’s hands and tools to perform the task. This makes leveraging different tools easy, allowing humans to quickly transfer their knowledge to the robot for specific situations.

Historically, robots, like most automated cars, continuously perceive their surroundings, predict a safe path, then compute a plan of motions based on this understanding. At the other end of the spectrum, new deep learning methods compute low-level motor actions directly from visual inputs, which requires a significant amount of data from the robot performing the task. We take a middle ground. Our teaching system only needs to understand things around it that are relevant to the behavior being performed. Instead of linking low-level motor actions to what it sees, it uses higher-level behaviors. As a result, our system does not need prior object models or maps. It can be taught to associate a given set of behaviors to arbitrary scenes, objects, and voice commands from a single demonstration of the behavior. This also makes the system easy to understand and makes failure conditions easy to diagnose and reproduce.”
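As a sketch of how these ideas might fit together (all names and structures here are hypothetical illustrations, not the actual system): a behavior taught from a single demonstration could bundle the teacher’s 3D annotations with a descriptor of the demonstrated scene, and at run time the robot retrieves the behavior whose taught scene best matches what it currently sees, even if the scene has changed slightly.

```python
import numpy as np
from dataclasses import dataclass

# Hypothetical sketch of the ideas in the passage above: a behavior taught
# from a single demonstration stores the teacher's 3D annotations (a grasp
# point, a cabinet door's rotation axis) together with a descriptor of the
# demonstrated scene. None of these names reflect the actual system.

@dataclass
class TaughtBehavior:
    name: str
    voice_command: str
    grasp_point: tuple            # (x, y, z) the teacher marked on the handle
    hinge_axis: tuple             # (origin, direction) line drawn by the teacher
    scene_descriptor: np.ndarray  # features of the scene at demonstration time

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_behavior(behaviors, scene_descriptor, threshold=0.9):
    """Return the taught behavior whose demo scene best matches, or None."""
    best = max(behaviors, key=lambda b: cosine(b.scene_descriptor, scene_descriptor))
    return best if cosine(best.scene_descriptor, scene_descriptor) >= threshold else None

open_cabinet = TaughtBehavior(
    name="open_cabinet",
    voice_command="open the cabinet",
    grasp_point=(0.42, -0.10, 0.95),
    hinge_axis=((0.60, -0.35, 0.80), (0.0, 0.0, 1.0)),
    scene_descriptor=np.array([0.9, 0.1, 0.3]),
)

# a slightly changed scene still retrieves the same behavior
print(match_behavior([open_cabinet], np.array([0.85, 0.15, 0.35])).name)
```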

Deep Learning (DL) performs classification tasks using a series of layers. To execute these tasks effectively, local decisions are made progressively along the layers. But can we make one all-encompassing decision instead, by choosing the most influential path to the output rather than deciding locally at each layer?

In an article published today in Scientific Reports, researchers from Bar-Ilan University in Israel answer this question with a resounding “yes.” Pre-existing deep architectures have been improved by updating the most influential paths to the output.

“One can think of it as two children who wish to climb a mountain with many twists and turns. One of them chooses the fastest local route at every intersection while the other uses binoculars to see the entire path ahead and picks the shortest and most significant route, just like Google Maps or Waze. The first child might get a head start, but the second will end up winning,” said Prof. Ido Kanter, of Bar-Ilan’s Department of Physics and Gonda (Goldschmied) Multidisciplinary Brain Research Center, who led the research.
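In a standard feed-forward network, that idea can be made concrete. The toy sketch below (not the paper’s actual method, just an illustration of global path selection) uses dynamic programming to find the input-to-output path whose product of absolute weights is largest, rather than greedily picking the strongest connection at each layer:

```python
import numpy as np

# Toy illustration: find the single path from an input unit to an output
# unit with the largest product of absolute weights, via dynamic
# programming. A simplified stand-in, not the algorithm from the paper.

rng = np.random.default_rng(0)
layers = [rng.normal(size=(4, 5)), rng.normal(size=(5, 3)), rng.normal(size=(3, 2))]

def most_influential_path(weight_mats, input_unit=0):
    # score[j] = best product of |weights| from the input unit to unit j
    score = np.zeros(weight_mats[0].shape[0])
    score[input_unit] = 1.0
    back = []  # backpointers for path recovery
    for W in weight_mats:
        cand = score[:, None] * np.abs(W)  # all one-step path extensions
        back.append(cand.argmax(axis=0))   # best predecessor for each unit
        score = cand.max(axis=0)
    # walk backpointers from the strongest output unit
    path = [int(score.argmax())]
    for bp in reversed(back):
        path.append(int(bp[path[-1]]))
    return list(reversed(path)), score.max()

path, strength = most_influential_path(layers)
print("path (unit index per layer):", path, "strength:", round(strength, 3))
```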

Summary: Researchers successfully mapped the neural activity of the C. elegans worm, correlating it to its behaviors such as movement and feeding.

Using novel technologies and methodologies, they developed a comprehensive atlas that showcases how most of the worm’s neurons encode its various actions.

This study provides an intricate look into how an animal’s nervous system controls behavior. The team’s findings, data, and models are available on the “WormWideWeb.”
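A minimal sketch of what “encoding behavior” means in such an atlas, using synthetic data rather than the study’s recordings: fit a model that predicts a neuron’s activity from behavioral variables such as velocity and feeding, then ask how much variance it explains.

```python
import numpy as np

# Sketch of an encoding analysis in the spirit of the study, on synthetic
# data: how well is one neuron's activity explained by behavior variables?

rng = np.random.default_rng(1)
T = 500
velocity = rng.normal(size=T)
feeding = rng.normal(size=T)
# synthetic neuron that mostly encodes velocity
activity = 1.5 * velocity + 0.2 * feeding + 0.3 * rng.normal(size=T)

X = np.column_stack([velocity, feeding, np.ones(T)])  # design matrix + bias
coef, *_ = np.linalg.lstsq(X, activity, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((activity - pred) ** 2) / np.sum((activity - activity.mean()) ** 2)

print("velocity weight %.2f, feeding weight %.2f, R^2 %.2f" % (coef[0], coef[1], r2))
```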

How molecules change when they react to stimuli such as light is fundamental in biology, for example during photosynthesis. Scientists in several fields have been working to unravel the workings of these changes, and by combining two of those fields, researchers have paved the way for a new era in understanding the reactions of protein molecules fundamental for life.

The large international research team, led by Professor Jasper van Thor from the Department of Life Sciences at Imperial, report their results in the journal Nature Chemistry.

Crystallography is a powerful technique for taking ‘snapshots’ of how molecules are arranged. Over several large-scale experiments and years of theory work, the team behind the new study integrated this with another technique that maps vibrations in the electronic and nuclear configuration of molecules, called spectroscopy.

Scientists working in connectomics, a research field occupied with the reconstruction of neuronal networks in the brain, are aiming to completely map the millions or billions of neurons found in mammalian brains. In spite of impressive advances in electron microscopy, the key bottleneck for connectomics is the amount of human labor required for the data analysis. Researchers at the Max Planck Institute for Brain Research in Frankfurt, Germany, have now developed reconstruction software that allows researchers to fly through the brain tissue at unprecedented speed. Together with the startup company scalable minds, they created webKnossos, which turns researchers into brain pilots, achieving an approximately 10-fold speedup for data analysis in connectomics.
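The core artifact of such tracing is a skeleton: a chain of 3D nodes laid down as the annotator flies along an axon. The sketch below is an invented minimal structure for illustration, not webKnossos’ actual data model.

```python
# Minimal invented sketch of a skeleton tracing: a growing chain of 3D
# nodes plus edges, as produced while "flying" along an axon.

class SkeletonTracing:
    def __init__(self):
        self.nodes = []   # list of (x, y, z) positions in voxel coordinates
        self.edges = []   # pairs of node indices

    def extend(self, position):
        """Add the next node along the flight path and link it to the last."""
        self.nodes.append(position)
        if len(self.nodes) > 1:
            self.edges.append((len(self.nodes) - 2, len(self.nodes) - 1))

    def path_length(self):
        """Total traced length, a simple annotation-speed metric."""
        total = 0.0
        for a, b in self.edges:
            ax, ay, az = self.nodes[a]
            bx, by, bz = self.nodes[b]
            total += ((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2) ** 0.5
        return total

tracing = SkeletonTracing()
for p in [(10, 10, 10), (14, 11, 10), (19, 13, 11)]:
    tracing.extend(p)
print(len(tracing.nodes), "nodes,", round(tracing.path_length(), 1), "voxels traced")
```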

Billions of nerve cells work in parallel inside our brains to achieve behaviours as impressive as hypothesizing, predicting, detecting, and thinking. These neurons form a highly complex network, in which each nerve cell communicates with about one thousand others. Signals travel along ultrathin cables, called axons, which extend from each neuron to its roughly one thousand “followers.”

Only thanks to recent developments in electron microscopy can researchers aim at mapping these networks in detail. The analysis of such image data, however, is still the key bottleneck in connectomics. Most interestingly, human annotators still outperform even the best computer-based analysis methods today. Scientists have to combine human and machine analysis to make sense of the huge image datasets obtained from the electron microscopes.
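One common pattern for such a combination (assumed here for illustration; the article does not spell out a specific pipeline) is confidence-based triage: the machine keeps the decisions it is sure about and queues the rest for human annotators, least confident first.

```python
# Illustrative confidence-based triage between machine and human analysis.
# The cutoff and decision records are invented for this sketch.

decisions = [
    {"id": "merge-17", "confidence": 0.99},
    {"id": "merge-18", "confidence": 0.62},
    {"id": "merge-19", "confidence": 0.95},
]

CONFIDENCE_CUTOFF = 0.9

auto_accepted = [d for d in decisions if d["confidence"] >= CONFIDENCE_CUTOFF]
human_queue = sorted(
    (d for d in decisions if d["confidence"] < CONFIDENCE_CUTOFF),
    key=lambda d: d["confidence"],  # least confident first
)

print("auto:", [d["id"] for d in auto_accepted])
print("for human review:", [d["id"] for d in human_queue])
```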

The need for window-washing humans or robots, therefore, is only going to get bigger in 21st-century cities around the world.

Ozmo combines a flexible robotic arm, artificial intelligence, machine learning and computer vision to clean building facades. Its onboard sensors adjust the pressure applied based on the type and thickness of the glass. Onboard LiDAR maps the building facades it is working on in three dimensions. As it moves, it calculates its cleaning path hundreds of times per second while adapting to variable external environments using onboard machine learning. Windy conditions pose no threat. And no humans are at risk: a handler can take over by remote control should Ozmo need to be shut down.
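A hedged sketch of what such a control loop might look like; every constant, rule, and function name below is invented for illustration and does not describe Ozmo’s actual software.

```python
import time

# Invented sketch of the control loop described above: pressure adjusted
# to the glass, path replanned at high frequency, and a remote handler
# able to shut the robot down. All values and rules are assumptions.

PRESSURE_BY_GLASS = {"tempered": 2.0, "laminated": 1.5, "annealed": 1.0}  # bar

def control_step(glass_type, thickness_mm, wind_speed_ms, handler_stop):
    if handler_stop:
        return {"action": "shutdown", "reason": "remote handler command"}
    # thinner glass gets proportionally less pressure (assumed rule)
    pressure = PRESSURE_BY_GLASS[glass_type] * min(thickness_mm / 6.0, 1.0)
    # slow the arm as wind picks up rather than stopping work (assumed rule)
    speed = max(0.1, 1.0 - wind_speed_ms / 20.0)  # m/s along the cleaning path
    return {"action": "clean", "pressure_bar": round(pressure, 2),
            "arm_speed_ms": round(speed, 2)}

# "hundreds of times per second" could mean e.g. a 200 Hz loop (5 ms period)
for _ in range(3):
    print(control_step("laminated", 4.0, wind_speed_ms=8.0, handler_stop=False))
    time.sleep(0.005)
```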

HBP researchers from Germany performed detailed cytoarchitectonic mapping of distinct areas in a human cortical region called the frontal operculum and, using connectivity modelling, linked the areas to a variety of functions including sexual sensation, muscle coordination, and music and language processing.

The study contributes to further unravelling the relationship between the human brain’s structure and its function, and is the first proof-of-concept of structural and functional connectivity analysis of the frontal operculum. The newly identified cytoarchitectonic areas have been made publicly available as part of the Julich-Brain Atlas on the EBRAINS platform, inviting future research to further characterise this brain region.

Based on cell-body stained histological sections in ten postmortem brains (five females and five males), HBP researchers from Heinrich Heine University Düsseldorf and Research Centre Jülich identified three new areas in the frontal operculum: Op5, Op6 and Op7. Each of these areas had a distinct cytoarchitecture. Connectivity modelling showed that each area could be ascribed a distinct functional role.
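The logic of ascribing a function from connectivity can be illustrated with a toy “fingerprint” comparison: correlate each area’s connectivity profile with reference profiles of known functional networks and take the best match. All numbers and labels below are synthetic and stand in for no real data from the study.

```python
import numpy as np

# Toy illustration of function assignment from connectivity fingerprints.
# Reference networks, area labels, and values are synthetic.

reference = {
    "language": np.array([0.8, 0.1, 0.2, 0.6]),
    "motor":    np.array([0.1, 0.9, 0.5, 0.2]),
    "music":    np.array([0.6, 0.2, 0.1, 0.8]),
}

areas = {
    "areaA": np.array([0.2, 0.85, 0.55, 0.25]),
    "areaB": np.array([0.75, 0.15, 0.2, 0.55]),
    "areaC": np.array([0.55, 0.25, 0.15, 0.85]),
}

def best_match(fingerprint):
    """Return the reference network most correlated with this fingerprint."""
    corr = {name: np.corrcoef(fingerprint, ref)[0, 1] for name, ref in reference.items()}
    return max(corr, key=corr.get)

for area, fp in areas.items():
    print(area, "->", best_match(fp))
```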