
THE COMPUTATIONAL UNIVERSE: MODELLING COMPLEXITY — Stephen Wolfram PhD #52

Does the use of computer models in physics change the way we see the universe? How far-reaching are the implications of computational irreducibility? Are observer limitations key to the way we conceive the laws of physics?
In this episode we get into the difficult yet beautiful topic of trying to model complex systems like nature and the universe computationally, and how, beyond a low level of complexity, all systems seem to become equally unpredictable. We have a whole episode in this series on Complexity Theory in biology and nature, but today we’re going to take a more physics and computational slant.
Another key element of this episode is Observer Theory, because we have to take into account the perceptual limitations of our species’ context and perspective if we want to understand how the laws of physics we’ve worked out from our environment are not, and cannot be, fixed and universal, but will always be perspective-bound, within a multitude of alternative branches of possible reality with alternative possible computational rules. We’ll then connect this multicomputational approach to a reinterpretation of Entropy and the 2nd law of thermodynamics.
The fact that my guest has been building on these ideas for over 40 years, creating a computer language and AI solutions to map his deep theories of computational physics, makes him the ideal guest to help us unpack this topic. He is physicist, computer scientist and tech entrepreneur Stephen Wolfram. In 1987 he left academia at Caltech and Princeton behind and devoted himself to his computer science intuitions at his company, Wolfram Research. He has published many blog articles about his ideas and written many influential books, including “A New Kind of Science”, “A Project to Find the Fundamental Theory of Physics”, “Computer Modelling and Simulation of Dynamic Systems”, and, just out in 2023, “The Second Law”, about the mystery of entropy.
One of the most wonderful things about Stephen Wolfram is that, despite his visionary insight into reality, he really loves to be ‘in the moment’ with his thinking: engaging in Socratic dialogue, staying open to perspectives other than his own, and allowing his old ideas to be updated if something comes up that contradicts them. Given how quickly the fields of physics and computer science are evolving, I think his humility and conceptual flexibility give us a fine example of how we should update how we do science as we go.

What we discuss:
00:00 Intro.
07:45 The history of scientific models of reality: structural, mathematical and computational.
14:40 Late 2010s: a shift to computational models of systems.
20:20 The Principle of Computational Equivalence (PCE)
24:45 Computational Irreducibility — the property that means you can’t predict the outcome in advance (see the code sketch after this list).
27:50 The importance of the passage of time to Consciousness.
28:45 Irreducibility and the limits of science.
33:30 Gödel’s Incompleteness Theorem meets Computational Irreducibility.
42:20 Observer Theory and the Wolfram Physics Project.
45:30 Modelling the relations between discrete units of Space: Hypergraphs.
47:30 The progress of time is the computational process that is updating the network of relations.
50:30 We ’make’ space.
51:30 Branchial Space — different quantum histories of the world, branching and merging.
54:30 We perceive space and matter to be continuous because we’re very big compared to the discrete elements.
56:30 Branchial Space vs. the Many Worlds interpretation.
58:50 Rulial Space: All possible rules of all possible interconnected branches.
01:07:30 Wolfram Language bridges human thinking about their perspective with what is computationally possible.
01:11:00 Computational Intelligence is everywhere in the universe, e.g. the weather.
01:19:30 The Measurement problem of QM meets computational irreducibility and observer theory.
01:20:30 Entanglement explained — common ancestors in branchial space.
01:32:40 Inviting Stephen back for a separate episode on AI safety, safety solutions and applications for science, as we didn’t have time.
01:37:30 At the molecular level the laws of physics are reversible.
01:40:30 What looks random to us in entropy is actually full of data.
01:45:30 Entropy defined in computational terms.
01:50:30 If we ever overcame our finite minds, there would be no coherent concept of existence.
01:51:30 Parallels between modern physics and ancient eastern mysticism and cosmology.
01:55:30 Reductionism in an irreducible world: saying a lot from very little input.
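
As a toy illustration of the computational irreducibility discussed at 24:45, here is a minimal sketch (my own, not from the episode) of Wolfram’s Rule 30 cellular automaton: a one-line local rule whose long-run pattern has, as far as anyone knows, no shortcut other than actually running it step by step.

```python
# Rule 30 cellular automaton: new cell = left XOR (center OR right).
# Computational irreducibility in miniature: to know row N, you compute every row before it.
def rule30_step(cells):
    """Apply Rule 30 to one row of 0/1 cells (fixed zero boundary)."""
    padded = [0] + cells + [0]
    new = []
    for i in range(1, len(padded) - 1):
        left, center, right = padded[i - 1], padded[i], padded[i + 1]
        new.append(left ^ (center | right))
    return new

def run_rule30(width=63, steps=30):
    row = [0] * width
    row[width // 2] = 1          # single black cell in the middle
    for _ in range(steps):
        print("".join("#" if c else "." for c in row))
        row = rule30_step(row)

if __name__ == "__main__":
    run_rule30()
```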

References:
“The Second Law: Resolving the Mystery of the Second Law of Thermodynamics”, Stephen Wolfram.

“A New Kind of Science”, Stephen Wolfram.

“Observer Theory” (article), Stephen Wolfram.

Engineers develop blueprint for robot swarms, mimicking bee and ant construction

Bees, ants and termites don’t need blueprints. They may have queens, but none of these species breed architects or construction managers. Each insect worker, or drone, simply responds to cues like warmth or the presence or absence of building material. Unlike human manufacturing, the grand design emerges simply from the collective action of the drones—no central planning required.

Now, researchers at Penn Engineering have developed mathematical rules that allow virtual swarms of tiny robots to do the same. In simulations, the robots built honeycomb-like structures without ever following—or even being able to comprehend—a plan.

“Though what we have done is just a first step, it is a new strategy that could ultimately lead to a new paradigm in manufacturing,” says Jordan Raney, Associate Professor in Mechanical Engineering and Applied Mechanics (MEAM), and the co-senior author of a new paper in Science Advances. “Even 3D printers work step by step, resulting in what we call a brittle process. One simple mistake, like a clogged nozzle, ruins the entire process.”
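
To make the idea of plan-free construction concrete, here is a minimal sketch of local-rule building. It is a generic, hypothetical illustration, not the Penn Engineering rule set: each agent sees only its immediate neighborhood and deposits material next to existing blocks, and a connected structure emerges with no central planning.

```python
# Agents with purely local cues grow a structure from a seed block.
import random

GRID = 21
grid = [[0] * GRID for _ in range(GRID)]
grid[GRID // 2][GRID // 2] = 1          # seed block in the center

def neighbors(x, y):
    """Coordinates of the four orthogonal neighbors inside the grid."""
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < GRID and 0 <= ny < GRID:
            yield nx, ny

def agent_step():
    """One agent lands at a random cell and deposits if it senses adjacent material."""
    x, y = random.randrange(GRID), random.randrange(GRID)
    if grid[y][x] == 0 and any(grid[ny][nx] for nx, ny in neighbors(x, y)):
        grid[y][x] = 1                   # local cue only: a neighbor is built

for _ in range(5000):
    agent_step()

for row in grid:
    print("".join("#" if c else "." for c in row))
```

Because every deposit touches existing material, the result is always a connected structure, even though no agent ever holds a global blueprint.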

Information Processing via Human Soft Tissue: Soft Tissue Reservoir Computing

Physical reservoir computing refers to the concept of using nonlinear physical systems as computational resources to achieve complex information processing. This approach exploits the intrinsic properties of physical systems, such as their nonlinearity and memory, to perform computational tasks. Soft biological tissues possess characteristics such as stress-strain nonlinearity and viscoelasticity that satisfy the requirements of physical reservoir computing. This study evaluates the potential of human soft biological tissues as physical reservoirs for information processing. In particular, it determines the feasibility of using the inherent dynamics of human soft tissues as a physical reservoir to emulate nonlinear dynamic systems. In this concept, the deformation field within the muscle, obtained from ultrasound images, represented the state of the reservoir. The findings indicate that the dynamics of human soft tissue have a positive impact on the computational task of emulating nonlinear dynamic systems. Specifically, our system outperformed the simple LR model for the task. Simple LR models based on raw inputs, which do not account for the dynamics of soft tissue, fail to emulate the target dynamical system (relative error on the order of $10^{-2}$). By contrast, the emulation results obtained using our system closely approximated the target dynamics (relative error on the order of $10^{-3}$). These results suggest that the soft tissue dynamics contribute to the successful emulation of the nonlinear equation. This study suggests that human soft tissues can be used as a potential computational resource. Soft tissues are found throughout the human body; therefore, if computational processing is delegated to biological tissues, it could lead to a distributed computation system for human-assisted devices.
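
The scheme is easier to see in a minimal software sketch of reservoir computing (a generic echo state network, not the soft-tissue setup in the paper): a fixed random nonlinear reservoir stands in for the muscle deformation field, and only a linear readout is trained to emulate a nonlinear target system. The target recurrence below is a common second-order benchmark, chosen here for illustration.

```python
# Echo-state-network sketch: fixed random reservoir + trained linear readout.
import numpy as np

rng = np.random.default_rng(0)

# Input signal and a nonlinear target system to emulate.
T = 2000
u = rng.uniform(0, 0.5, T)
y = np.zeros(T)
for t in range(2, T):
    y[t] = 0.4 * y[t - 1] + 0.4 * y[t - 1] * y[t - 2] + 0.6 * u[t - 1] ** 3 + 0.1

# Fixed random reservoir: x(t+1) = tanh(W x(t) + W_in u(t)).
N = 100
W = rng.normal(0, 1, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # keep spectral radius below 1
W_in = rng.uniform(-1, 1, N)

X = np.zeros((T, N))
x = np.zeros(N)
for t in range(T - 1):
    x = np.tanh(W @ x + W_in * u[t])
    X[t + 1] = x

# Train only a linear (ridge regression) readout on the reservoir states.
washout, split = 100, 1500
A = X[washout:split]
ridge = 1e-6
w_out = np.linalg.solve(A.T @ A + ridge * np.eye(N), A.T @ y[washout:split])

pred = X[split:] @ w_out
rel_err = np.linalg.norm(pred - y[split:]) / np.linalg.norm(y[split:])
print(f"relative test error: {rel_err:.3e}")
```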

Universal framework enables custom 3D point spread functions for advanced imaging

Engineers at the UCLA Samueli School of Engineering have introduced a universal framework for point spread function (PSF) engineering, enabling the synthesis of arbitrary, spatially varying 3D PSFs using diffractive optical processors. The research is published in the journal Light: Science & Applications.

This framework allows for advanced imaging capabilities—such as snapshot 3D imaging—without the need for spectral filters, axial scanning, or digital reconstruction.

PSF engineering plays a significant role in modern microscopy, spectroscopy and computational imaging. Conventional techniques typically employ phase masks at the pupil plane, which constrain the complexity and mathematical representation of the achievable PSF structures.
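
For context, here is a minimal sketch of the conventional pupil-plane approach the article contrasts with (not the diffractive-processor framework itself): under the standard Fraunhofer approximation, the incoherent PSF is the squared magnitude of the Fourier transform of the pupil function, so shaping the pupil phase reshapes the PSF. The vortex phase mask below is just an example choice.

```python
# Compute a PSF from a pupil-plane phase mask: PSF = |FT{pupil}|^2.
import numpy as np

N = 256
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
R = np.sqrt(X**2 + Y**2)

aperture = (R <= 0.5).astype(float)         # circular pupil
phase = 8.0 * np.arctan2(Y, X)              # example phase mask: optical vortex, charge 8
pupil = aperture * np.exp(1j * phase)

field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
psf = np.abs(field) ** 2
psf /= psf.sum()                            # normalize total energy to 1

print("PSF peak (normalized):", psf.max())
```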

Decoding high energy physics with AI and machine learning

In the world of particle physics, where scientists unravel the mysteries of the universe, artificial intelligence (AI) and machine learning (ML) are making waves by deepening our understanding of the most fundamental particles. Central to this exploration are parton distribution functions (PDFs). These complex mathematical models are crucial for predicting the outcomes of high-energy physics experiments that test the Standard Model of particle physics.
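
As a highly simplified illustration (nothing like a real global PDF fit), parton distributions are often written in a parameterized form such as x·f(x) = A·x^a·(1−x)^b and the parameters are fitted to data; ML-based approaches replace this fixed functional form with more flexible models such as neural networks. The sketch below fits the toy form to mock data.

```python
# Toy fit of a standard PDF-style parameterization to mock data.
import numpy as np
from scipy.optimize import curve_fit

def xf(x, A, a, b):
    """Toy parameterization x*f(x) = A * x^a * (1 - x)^b."""
    return A * x**a * (1.0 - x)**b

rng = np.random.default_rng(1)
x_data = np.linspace(0.01, 0.95, 40)
true = xf(x_data, 1.5, 0.3, 3.0)
y_data = true + rng.normal(0, 0.01, x_data.size)   # mock measurements with noise

params, _ = curve_fit(xf, x_data, y_data, p0=[1.0, 0.5, 2.0])
print("fitted A, a, b:", params)
```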

Understanding quantum computing’s most troubling problem—the barren plateau

For the past six years, Los Alamos National Laboratory has led the world in trying to understand one of the most frustrating barriers that faces variational quantum computing: the barren plateau.

“Imagine a landscape of peaks and valleys,” said Marco Cerezo, the Los Alamos team’s lead scientist. “When optimizing a variational, or parameterized, quantum circuit, one needs to tune a series of knobs that control the solution quality and move you in the landscape. Here, a peak represents a bad solution and a valley represents a good solution. But when researchers develop algorithms, they sometimes find their model has stalled and can neither climb nor descend. It’s stuck in this space we call a barren plateau.”

For these quantum computing methods, barren plateaus can be mathematical dead ends, preventing their implementation in large-scale realistic problems. Scientists have spent a lot of time and resources developing quantum algorithms only to find that they sometimes inexplicably stall. Understanding when and why barren plateaus arise has been a problem that has taken the community years to solve.
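
A toy numerical sketch (not the Los Alamos theory itself) shows the flattening effect. For the simple product ansatz U(theta) = RY(theta_1) ⊗ … ⊗ RY(theta_n) and the global cost C = |<0…0|U(theta)|0…0>|^2 = prod_i cos^2(theta_i/2), the gradient with respect to one parameter can be written down analytically, and its variance over random parameters shrinks exponentially as qubits are added: the landscape turns into a barren plateau.

```python
# Gradient concentration for a global cost on a product ansatz.
import numpy as np

rng = np.random.default_rng(0)

def grad_theta1(thetas):
    """Analytic dC/dtheta_1 for C = prod_i cos^2(theta_i / 2)."""
    rest = np.prod(np.cos(thetas[1:] / 2) ** 2)
    return -np.sin(thetas[0] / 2) * np.cos(thetas[0] / 2) * rest

for n_qubits in (2, 4, 8, 16, 24):
    grads = [grad_theta1(rng.uniform(0, 2 * np.pi, n_qubits)) for _ in range(20000)]
    print(f"{n_qubits:2d} qubits: Var[dC/dtheta_1] = {np.var(grads):.2e}")
```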

The Center of Our Universe Does Not Exist. A Physicist Explains Why

About a century ago, scientists were struggling to reconcile what seemed a contradiction in Albert Einstein’s theory of general relativity.

Published in 1915, and already widely accepted by physicists and mathematicians worldwide, the theory assumed the Universe was static – unchanging, unmoving and immutable. In short, Einstein believed the size and shape of the Universe today was, more or less, the same as it had always been.

But when astronomers looked into the night sky at faraway galaxies with powerful telescopes, they saw hints the Universe was anything but that. These new observations suggested the opposite – that it was, instead, expanding.

Animation technique simulates the motion of squishy objects

The technique simulates elastic objects for animation and other applications with improved reliability compared to other methods: many existing simulation techniques can produce elastic animations that become erratic or sluggish, or can even break down entirely.

To achieve this improvement, the MIT researchers uncovered a hidden mathematical structure in equations that capture how elastic materials deform on a computer. By leveraging this property, known as convexity, they designed a method that consistently produces accurate, physically faithful simulations.
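
A minimal sketch of why convexity helps (a generic one-node toy, not the MIT formulation): when the elastic energy is convex, an implicit time step reduces to minimizing a convex objective, so Newton’s method converges reliably instead of blowing up. Here a single mass on a spring is stepped with backward Euler by minimizing the convex incremental potential E(x) = 0.5·k·(x − rest)^2 + (m/(2·dt^2))·(x − x_pred)^2.

```python
# Implicit (backward Euler) step for one node on a spring via convex minimization.
k, m, dt, rest = 100.0, 1.0, 0.01, 0.0

def implicit_step(x, v):
    """One backward-Euler step by minimizing the convex incremental potential."""
    x_pred = x + dt * v                       # inertia term target
    hess = k + m / dt**2                      # constant, positive Hessian (convex)
    x_new = x
    for _ in range(20):                       # Newton iterations (one suffices for a quadratic)
        grad = k * (x_new - rest) + (m / dt**2) * (x_new - x_pred)
        x_new -= grad / hess
    v_new = (x_new - x) / dt
    return x_new, v_new

x, v = 1.0, 0.0                               # stretched spring, initially at rest
for step in range(5):
    x, v = implicit_step(x, v)
    print(f"step {step}: x = {x:.4f}, v = {v:.4f}")
```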

‘Optical neural engine’ can solve partial differential equations

Partial differential equations (PDEs) are a class of mathematical problems that represent the interplay of multiple variables, and therefore have predictive power when it comes to complex physical systems. Solving these equations is a perpetual challenge, however, and current computational techniques for doing so are time-consuming and expensive.

Now, research from the University of Utah’s John and Marcia Price College of Engineering is showing a way to speed up this process: encoding those equations in light and feeding them into their newly designed “optical neural engine,” or ONE.

The researchers’ ONE combines diffractive optical neural networks and optical matrix multipliers. Rather than representing PDEs digitally, the researchers represented them optically, with variables represented by the various properties of a light wave, such as its intensity and phase. As a wave passes through the ONE’s series of optical components, those properties gradually shift and change, until they ultimately represent the solution to the given PDE.
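
The underlying idea can be sketched in software (this is a generic illustration, not the Utah ONE hardware): for a linear PDE, the discretized solution operator is just a matrix, so "solving" the equation reduces to a matrix-vector multiplication, exactly the kind of operation an optical matrix multiplier can perform in a single pass of light. The example uses the 1D Poisson equation −u″(x) = f(x) with zero boundary values.

```python
# Represent a PDE solve as one precomputed matrix multiplication.
import numpy as np

# Finite-difference discretization of -u''(x) = f(x), u(0) = u(1) = 0.
N = 64
h = 1.0 / (N + 1)
x = np.linspace(h, 1 - h, N)
A = (np.diag(2 * np.ones(N))
     - np.diag(np.ones(N - 1), 1)
     - np.diag(np.ones(N - 1), -1)) / h**2

# Precompute the solution operator once; applying it is a pure matrix multiply.
solver_matrix = np.linalg.inv(A)

f = np.sin(np.pi * x)                      # right-hand side
u = solver_matrix @ f                      # one multiplication gives the solution
u_exact = np.sin(np.pi * x) / np.pi**2     # analytic solution for this f
print("max error:", np.max(np.abs(u - u_exact)))
```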