
World’s first human brain-scale neuromorphic supercomputer is coming

DeepSouth uses a neuromorphic computing system that mimics biological processes, using hardware to efficiently emulate large networks of spiking neurons at 228 trillion synaptic operations per second, rivalling the estimated rate of operations in the human brain.


Australian researchers are putting together a supercomputer designed to emulate the world’s most efficient learning machine – a neuromorphic monster capable of the same estimated 228 trillion synaptic operations per second that human brains handle.

As the age of AI dawns upon us, it’s clear that this wild technological leap is one of the most significant in the planet’s history, and will very soon be deeply embedded in every part of our lives. But it all relies on absolutely gargantuan amounts of computing power. Indeed, on current trends, the AI servers NVIDIA sells alone will likely be consuming more energy annually than many small countries. In a world desperately trying to decarbonize, that kind of energy load is a massive drag.

But as often happens, nature has already solved this problem. Our own necktop computers are still the state of the art, capable of learning super quickly from small amounts of messy, noisy data, or processing the equivalent of a billion billion mathematical operations every second – while consuming a paltry 20 watts of energy.

Google researchers make AI tech solve math puzzles “beyond human knowledge”

Artificial intelligence researchers claim to have made the world’s first genuine scientific discovery using a large language model (LLM), the technology that underpins ChatGPT and similar programs, signalling a major breakthrough.

The discovery was made by Google DeepMind, an AI research laboratory where scientists are investigating whether LLMs can do more than just repackage information learned in training and actually generate new insights.

It turns out that they can, and the implications are potentially huge. DeepMind said in a blog post that its FunSearch, a method to search for new solutions in mathematics and computer science, made “the first discoveries in open problems in mathematical sciences using LLMs.”

U.S. and China race to shield secrets from quantum computers

No one knows who might get there first. The United States and China are considered the leaders in the field; many experts believe America still holds an edge.

As the race to master quantum computing continues, a scramble is on to protect critical data. Washington and its allies are working on new encryption standards known as post-quantum cryptography – essentially codes that are much harder to crack, even for a quantum computer. Beijing is trying to pioneer quantum communications networks, a technology theoretically impossible to hack, according to researchers. The scientist spearheading Beijing’s efforts has become a minor celebrity in China.

Quantum computing is radically different. Conventional computers process information as bits – either 1 or 0, one value at a time. Quantum computers process information as quantum bits, or “qubits,” which can exist in a superposition of 1 and 0 simultaneously; physicists caution that this is only an approximate way of describing a more complex mathematical reality.
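The superposition idea can be made a little more concrete. In the underlying mathematics, a qubit is described by two complex amplitudes, and measuring it yields 0 or 1 with probabilities given by the squared magnitudes of those amplitudes. The following is a minimal sketch of our own, not code from any quantum platform; the function name is illustrative:

```python
import math

def measurement_probabilities(alpha: complex, beta: complex) -> tuple[float, float]:
    """Probabilities of measuring 0 or 1 for a qubit with amplitudes (alpha, beta).

    The state is normalized so that the probabilities sum to 1.
    """
    norm = abs(alpha) ** 2 + abs(beta) ** 2
    return abs(alpha) ** 2 / norm, abs(beta) ** 2 / norm

# An equal superposition, (|0> + |1>) / sqrt(2): the qubit is not "between"
# 0 and 1, but measuring it gives each outcome with probability 1/2.
alpha = beta = 1 / math.sqrt(2)
p0, p1 = measurement_probabilities(alpha, beta)
```

The key point the sketch captures is that a qubit holds two continuous amplitudes at once, but a measurement collapses it to a single classical bit.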

AI Takes Over The Classroom: Alpha Helping Solve The Teacher Shortage

The teacher shortage crisis is a major concern, casting a shadow on educational quality across the globe. In this academic climate, the rise of AI in the classroom sparks both hope and skepticism. Alpha school is leading the way: it has no traditional teachers, relying instead on its AI-powered curriculum and “guide” system. This innovative approach offers a glimpse of a promising future where technology and human ingenuity merge to redefine education.

AI has become a game-changer in education by customizing learning experiences according to students’ individual learning styles and paces. Alpha’s app-based tutoring system is a prime example of this. It is personalized for each student’s strengths and weaknesses, a significant departure from the traditional “one-size-fits-all” classroom approach. For instance, consider a child who struggles with math concepts. AI can modify the exercises and explanations to suit their learning style, enabling them to understand the material better.

Moreover, this AI-driven education system offers instant and detailed feedback, which may be lacking in some schools. Such immediate response fosters a deeper understanding and encourages a more engaged learning process. This level of individualized attention is a powerful tool for enhancing knowledge and engagement.

DeepMind AI outdoes human mathematicians on unsolved problem

For the first time ever, researchers show how a large language model can help discover novel solutions to long-standing problems in math and computer science.


The card game Set has long inspired mathematicians to create interesting problems.

Now, a technique based on large language models (LLMs) is showing that artificial intelligence (AI) can help mathematicians to generate new solutions.

The AI system, called FunSearch, made progress on Set-inspired problems in combinatorics, a field of mathematics that studies how to count the possible arrangements of sets containing finitely many objects. But its inventors say that the method, described in Nature on 14 December, could be applied to a variety of questions in maths and computer science.
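The best-known of these Set-inspired problems is the cap set problem: finding the largest subset of points in {0,1,2}^n containing no three distinct points that sum to zero mod 3 in every coordinate, the mathematical analogue of three cards forming a “Set.” The brute-force checker below is our own illustration of that condition, not DeepMind’s code (FunSearch evolves far cleverer construction programs):

```python
from itertools import combinations

def is_cap_set(points: list[tuple[int, ...]]) -> bool:
    """True if no three distinct points form a 'line' (sum to 0 mod 3 coordinate-wise)."""
    for a, b, c in combinations(points, 3):
        if all((x + y + z) % 3 == 0 for x, y, z in zip(a, b, c)):
            return False  # a, b, c lie on a line, so this is not a cap set
    return True

# In dimension 2, these four points avoid every line; adding (2, 2) would
# complete the line through (0, 0) and (1, 1).
cap = [(0, 0), (0, 1), (1, 0), (1, 1)]
```

Checking every triple is hopeless at the dimensions FunSearch tackled, which is why the discovery of larger cap sets was notable.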

Human Brain Cells on a Chip Can Recognize Speech And Do Simple Math

There is no computer even remotely as powerful and complex as the human brain. The lumps of tissue ensconced in our skulls can process information at quantities and speeds that computing technology can barely touch.

Key to the brain’s success is the neuron’s efficiency in serving as both a processor and memory device, in contrast to the physically separated units in most modern computing devices.

There have been many attempts to make computing more brain-like, but a new effort takes it a step further – by integrating real human brain tissue with electronics.

A Ball of Brain Cells on a Chip Can Learn Simple Speech Recognition and Math

The mini-brain functioned like both the central processing unit and memory storage of a supercomputer. It received input in the form of electrical zaps and outputted its calculations through neural activity, which was subsequently decoded by an AI tool.

When trained on soundbites from a pool of people—transformed into electrical zaps—Brainoware eventually learned to pick out the “sounds” of specific people. In another test, the system successfully tackled a complex math problem that’s challenging for AI.

The system’s ability to learn stemmed from changes to neural network connections in the mini-brain—which is similar to how our brains learn every day. Although just a first step, Brainoware paves the way for increasingly sophisticated hybrid biocomputers that could lower energy costs and speed up computation.
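This division of labor, a complex fixed dynamical system doing the heavy lifting while only a simple external decoder is trained, is the essence of reservoir computing, the framework the Brainoware work is often described in. The toy sketch below uses a random recurrent network as a stand-in for the organoid and trains only a linear readout; all names and parameters are our own assumptions, not the study’s:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 100

# Fixed, untrained "reservoir": plays the role of the mini-brain.
W_in = rng.normal(size=(n_res, n_in)) * 0.5
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # scale for stable dynamics

def reservoir_states(inputs: np.ndarray) -> np.ndarray:
    """Drive the reservoir with a 1-D input sequence; return its state at each step."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u))
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the next value of a sine wave from its history.
t = np.linspace(0, 8 * np.pi, 400)
signal = np.sin(t)
X = reservoir_states(signal[:-1])
y = signal[1:]

# Ridge-regression readout: the only trained component, analogous to the
# AI decoder applied to the organoid's neural activity.
readout = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
pred = X @ readout
```

The appeal of the approach is exactly what the article highlights: the expensive, adaptive part of the computation happens in the physical substrate, so the trained component stays cheap.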