
Archive for the ‘robotics/AI’ category: Page 5

Sep 6, 2024

Scientists create army of tiny robots that can be injected into the human body

Posted by in categories: biotech/medical, nanotechnology, robotics/AI

Researchers said the study showed nanobots had the potential to transport drugs to precise locations.

Sep 6, 2024

Tiny magnetic robots could treat bleeds in the brain

Posted by in categories: biotech/medical, robotics/AI

The development could enable precise, relatively low-risk treatment of brain aneurysms, which cause around 500,000 deaths globally each year. The medical condition – a blood-filled bulge on a brain artery that can rupture and cause fatal bleeds – can also lead to stroke and disability.

The study points to a future where tiny robots could be remotely controlled to carry out complex tasks inside the human body – such as targeted drug delivery and organ repair – in a minimally invasive way, researchers say.

Sep 6, 2024

AI helps distinguish dark matter from cosmic noise

Posted by in categories: cosmology, robotics/AI

Dark matter is the invisible force holding the universe together—or so we think. It makes up about 85% of all matter and around 27% of the universe’s contents, but since we can’t see it directly, we have to study its gravitational effects on galaxies and other cosmic structures. Despite decades of research, the true nature of dark matter remains one of science’s most elusive questions.

Sep 6, 2024

Fish scale-inspired design boosts concrete crack resistance by 63%

Posted by in categories: materials, robotics/AI

Humans are still learning from nature.

Researchers mimicked ancient fish scales for a 3D-printed concrete structure.


Sep 6, 2024

New neural framework enhances reconstruction of high-resolution images

Posted by in categories: robotics/AI, space

Deep learning (DL) has significantly transformed the field of computational imaging, offering powerful solutions to enhance performance and address a variety of challenges. Traditional methods often rely on discrete pixel representations, which limit resolution and fail to capture the continuous and multiscale nature of physical objects. Recent research from Boston University (BU) presents a novel approach to overcome these limitations.

As reported in Advanced Photonics Nexus, researchers from BU’s Computational Imaging Systems Lab have introduced a local conditional neural field (LCNF) network to address these limitations. Their scalable and generalizable LCNF system is called “neural phase retrieval,” or “NeuPh” for short.

NeuPh leverages advanced DL techniques to reconstruct high-resolution phase information from low-resolution measurements. This method employs a convolutional neural network (CNN)-based encoder to compress captured images into a compact latent-space representation.
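To make the architecture concrete, here is a minimal sketch of an LCNF-style reconstructor, assuming PyTorch; the layer sizes, class names, and the way the latent code conditions the coordinate decoder are illustrative guesses, not the BU team’s actual NeuPh implementation.

```python
# Sketch only: CNN encoder compresses measurements into a latent map, and a coordinate-
# conditioned MLP ("neural field") is queried at continuous positions to decode phase.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentEncoder(nn.Module):
    """CNN encoder: compresses low-resolution measurements into a latent feature map."""
    def __init__(self, in_ch: int = 1, latent_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, latent_dim, 3, padding=1), nn.ReLU(),
        )

    def forward(self, x):          # x: (B, C, H, W) low-resolution measurements
        return self.net(x)         # (B, latent_dim, H, W) latent feature map

class LocalConditionalField(nn.Module):
    """MLP decoder: maps a continuous coordinate plus its local latent code to a phase value."""
    def __init__(self, latent_dim: int = 64, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim + 2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, coords, latents):
        # coords: (B, N, 2) query points in [-1, 1]; latents: (B, latent_dim, H, W).
        # Sample the latent map locally at each query point, then decode with the MLP.
        local = F.grid_sample(latents, coords.unsqueeze(2), align_corners=True)
        local = local.squeeze(-1).permute(0, 2, 1)              # (B, N, latent_dim)
        return self.mlp(torch.cat([local, coords], dim=-1))     # (B, N, 1) phase values

# Usage: encode one low-resolution frame, then query phase on a denser coordinate grid.
encoder, field = LatentEncoder(), LocalConditionalField()
measurements = torch.randn(1, 1, 32, 32)
coords = torch.rand(1, 1024, 2) * 2 - 1
phase = field(coords, encoder(measurements))
print(phase.shape)    # torch.Size([1, 1024, 1])
```

The point the sketch tries to capture is that the decoder is queried at continuous coordinates, so the reconstruction is not tied to the pixel grid of the original measurements.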

Sep 6, 2024

Lumen Orbit 🚀 Data Centers in Space

Posted by in categories: robotics/AI, solar power, space, sustainability

🚀 Lumen Orbit (YC S24) is building a network of megawatt-scale data centers in space, scalable to gigawatt capacity.

Why we should train AI in space.


Sep 6, 2024

Scientists invent nanorobots that can repair brain aneurysms

Posted by in categories: biotech/medical, robotics/AI

Tiny robots much smaller than blood cells could deliver clot-forming drugs where they’re needed most, a study in rabbits suggests. The tech has yet to be tested in humans.

Sep 6, 2024

Three reasons robots are about to become way more useful

Posted by in category: robotics/AI

We’re inching ever closer to robots being able to handle household tasks.

Sep 5, 2024

The Signals in Your Brain that Tell You When It’s Time to Move

Posted by in category: robotics/AI

A new study published in “Nature Communications” this week examines how the brain initiates spontaneous actions. It was led by Jake Gavenas, PhD, while he was a doctoral student at the Brain Institute at Chapman University, and co-authored by two Brain Institute faculty members, Uri Maoz and Aaron Schurger. In addition to demonstrating how spontaneous action emerges without environmental input, the study has implications for the origins of the slow ramping of neural activity before movement onset, a commonly observed but poorly understood phenomenon.

In their study, Gavenas and colleagues propose an answer to that question. They simulated spontaneous activity in simple neural networks and compared this simulated activity to intracortical recordings of humans when they moved spontaneously. The study results suggest something striking: many rapidly fluctuating neurons can interact in a network to give rise to very slow fluctuations at the level of the population.

Imagine, for example, standing atop a high-dive platform and trying to summon the willpower to jump. Nothing in the outside world tells you when to jump; that decision comes from within. At some point you experience deciding to jump and then you jump. In the background, your brain (or, more specifically, your motor cortex) sends electrical signals that cause carefully coordinated muscle contractions across your body, resulting in you running and jumping. But where in the brain do these signals originate, and how do they relate to the conscious experience of willing your body to move?
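To make that population-level claim concrete, here is a toy simulation in Python/NumPy, a minimal sketch under assumed parameters (200 units, a 20 ms single-unit time constant, shared excitation just below instability) rather than the authors’ actual model: individual units fluctuate quickly, yet their population average drifts on a far slower timescale.

```python
# Toy sketch (not the study's model): fast units whose shared excitation sits just
# below instability; their population average fluctuates far more slowly than any unit.
import numpy as np

rng = np.random.default_rng(0)
n_units, n_steps, dt, tau = 200, 5000, 0.001, 0.020   # 5 s of activity, 1 ms steps, 20 ms units
coupling = 0.95                                        # shared excitation, kept < 1 for stability

x = np.zeros(n_units)
population = np.empty(n_steps)
for t in range(n_steps):
    # Each unit: fast leak toward zero, weak drive from the population mean, private noise.
    dx = (-x + coupling * x.mean()) / tau
    x = x + dt * dx + np.sqrt(dt) * rng.normal(0.0, 1.0, n_units)
    population[t] = x.mean()

# Single units decorrelate within ~tau (20 ms), while the population mean decorrelates
# on ~tau / (1 - coupling) = 400 ms: slow, ramp-like fluctuations emerge from fast parts.
print(population[::500])    # coarse view of the slow population drift
```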

Sep 5, 2024

AI-Assisted Police Reports and the Challenge of Generative Suspicion

Posted by in categories: law, robotics/AI

This article delves into a transformative shift in the criminal justice system brought on by the use of AI-assisted police reports.


Police reports play a central role in the criminal justice system. Often, a police report is the only official memorialization of what happened during an incident, shaping probable cause determinations, pretrial detention decisions, motions to suppress, plea bargains, and trial strategy. For over a century, human police officers wrote the factual narratives that shaped the trajectory of individual cases and organized the entire legal system.

All that is about to change with the advent of AI-assisted police reports. Today, with the click of a button, generative AI large language models (LLMs) with predictive text capabilities can turn the audio feed of a police-worn body camera into a pre-written draft police report. Police officers then fill in the blanks with inserted details, like a “Mad Libs” of suspicion, and submit the edited version as the official narrative of an incident.
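As a rough illustration of the pipeline just described, the following is a hypothetical sketch in Python; the function names, prompt text, and stubbed transcription and model outputs are placeholders invented for illustration and do not correspond to any vendor’s actual product.

```python
# Hypothetical sketch of the audio-to-draft-report pipeline described above. The function
# names, prompt, and stubbed outputs are invented placeholders, not any vendor's software.

DRAFT_PROMPT = (
    "You are drafting a police incident report.\n"
    "Using only the transcript below, write a factual first-person narrative.\n"
    "Leave bracketed blanks such as [INCIDENT TYPE] for details not in the audio.\n\n"
    "Transcript:\n{transcript}\n"
)

def transcribe(audio_path: str) -> str:
    # Stub: a real system would run speech-to-text on the body-worn camera audio here.
    return "Dispatch advised of a possible break-in near 5th and Main ..."

def draft_report(transcript: str) -> str:
    # Stub: a real system would send this prompt to a large language model here.
    prompt = DRAFT_PROMPT.format(transcript=transcript)
    return "On the above date I responded to 5th and Main regarding [INCIDENT TYPE] ..."

def officer_review(draft: str, edits: dict[str, str]) -> str:
    # The officer fills in the bracketed blanks and submits the edited narrative.
    for blank, value in edits.items():
        draft = draft.replace(blank, value)
    return draft

report = officer_review(
    draft_report(transcribe("bodycam_clip.wav")),
    {"[INCIDENT TYPE]": "a reported break-in"},
)
print(report)
```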


Page 5 of 2,355