
Our lifespans might feel like a long time by human standards, but to the Earth they're the blink of an eye. Even the entirety of human history represents a tiny sliver of our planet's vast chronology. We often think about geological time when looking back into the past, but today we look ahead. What might happen on our planet in the next billion years?

Written and presented by Prof David Kipping, edited by Jorge Casas.

→ Support our research program: https://www.coolworldslab.com/support.
→ Get Stash here! https://teespring.com/stores/cool-worlds-store.

THANK-YOU to our supporters D. Smith, M. Sloan, C. Bottaccini, D. Daughaday, A. Jones, S. Brownlee, N. Kildal, Z. Star, E. West, T. Zajonc, C. Wolfred, L. Skov, G. Benson, A. De Vaal, M. Elliott, B. Daniluk, M. Forbes, S. Vystoropskyi, S. Lee, Z. Danielson, C. Fitzgerald, C. Souter, M. Gillette, T. Jeffcoat, J. Rockett, D. Murphree, S. Hannum, T. Donkin, K. Myers, A. Schoen, K. Dabrowski, J. Black, R. Ramezankhani, J. Armstrong, K. Weber, S. Marks, L. Robinson, S. Roulier, B. Smith, G. Canterbury, J. Cassese, J. Kruger, S. Way, P. Finch, S. Applegate, L. Watson, E. Zahnle, N. Gebben, J. Bergman, E. Dessoi, J. Alexander, C. Macdonald, M. Hedlund, P. Kaup, C. Hays, W. Evans, D. Bansal, J. Curtin, J. Sturm, RAND Corp., M. Donovan, N. Corwin, M. Mangione, K. Howard, L. Deacon, G. Metts, G. Genova, R. Provost, B. Sigurjonsson, G. Fullwood, B. Walford, J. Boyd, N. De Haan, J. Gillmer, R. Williams, E. Garland, A. Leishman, A. Phan Le, R. Lovely, M. Spoto, A. Steele, M. Varenka, K. Yarbrough & F. Demopoulos.

I wonder if musicians should be worried.


Google Research introduces MusicLM, a model that can generate high-fidelity music from text descriptions. See how MusicLM casts the process of conditional music generation as a hierarchical sequence-to-sequence modeling task, and how it outperforms previous systems in both audio quality and adherence to the text description. Learn more about MusicCaps, a dataset composed of 5.5k music-text pairs, and see how MusicLM can be conditioned on both text and a melody. Check out this video to see the power of MusicLM: Generating Music From Text! #GoogleResearch #MusicLM #MusicGeneration.

▼ Link(s) From Today’s Video:

An impressive new AI system from Google can generate music in any genre given a text description. But the company, fearing the risks, has no immediate plans to release it.

Called MusicLM, Google's system certainly isn't the first generative AI for song. There have been other attempts, including Riffusion, an AI that composes music by visualizing it, as well as Dance Diffusion, Google's own AudioLM and OpenAI's Jukebox. But owing to technical limitations and limited training data, none has been able to produce songs that are particularly complex in composition or high in fidelity.

MusicLM is perhaps the first that can.
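The description above says MusicLM casts conditional music generation as hierarchical sequence-to-sequence modeling: a text prompt conditions a coarse stream of tokens, which in turn conditions finer acoustic tokens that a neural codec would turn into audio. The toy numpy sketch below only mimics that two-stage shape; every function and number in it is a hypothetical stand-in for illustration, not Google's code.

```python
# Toy sketch of hierarchical, text-conditioned token generation (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def text_to_embedding(prompt: str, dim: int = 16) -> np.ndarray:
    """Hypothetical stand-in for a learned text encoder."""
    local = np.random.default_rng(abs(hash(prompt)) % (2**32))
    return local.normal(size=dim)

def generate_semantic_tokens(text_emb: np.ndarray, length: int = 50) -> np.ndarray:
    """Stage 1: coarse 'semantic' tokens, crudely biased by the text embedding."""
    logits = rng.normal(size=(length, 128)) + np.resize(text_emb, 128)
    return logits.argmax(axis=1)

def generate_acoustic_tokens(semantic: np.ndarray, codebooks: int = 4) -> np.ndarray:
    """Stage 2: finer 'acoustic' codec tokens derived from the stage-1 tokens."""
    return np.stack([(semantic * (k + 3)) % 1024 for k in range(codebooks)], axis=1)

prompt = "calm violin melody over a soft synth pad"
semantic = generate_semantic_tokens(text_to_embedding(prompt))
acoustic = generate_acoustic_tokens(semantic)
print(acoustic.shape)  # (50, 4): codes a neural codec decoder would turn into audio
```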

AI is being used to generate everything from images to text to artificial proteins, and now another thing has been added to the list: speech. Last week researchers from Microsoft released a paper on a new AI called VALL-E that can accurately simulate anyone’s voice based on a sample just three seconds long. VALL-E isn’t the first speech simulator to be created, but it’s built in a different way than its predecessors—and could carry a greater risk for potential misuse.

Most existing text-to-speech models use waveforms (graphical representations of sound waves as they move through a medium over time) to create fake voices, tweaking characteristics like tone or pitch to approximate a given voice. VALL-E, though, takes a sample of someone’s voice and breaks it down into components called tokens, then uses those tokens to create new sounds based on the “rules” it already learned about this voice. If a voice is particularly deep, or a speaker pronounces their A’s in a nasal-y way, or they’re more monotone than average, these are all traits the AI would pick up on and be able to replicate.
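To make the token idea above concrete, here is a deliberately simple numpy sketch: a short waveform is mapped to discrete codebook entries, a bigram table over those tokens stands in for the language model that learns the speaker's "rules", and new tokens are sampled and decoded back through the codebook. Everything here (the sine-wave "voice", the codebook size, the bigram model) is invented for illustration and is far simpler than what VALL-E actually does.

```python
# Toy illustration of token-based voice modeling (not VALL-E's actual pipeline).
import numpy as np

rng = np.random.default_rng(1)
sr = 16_000
prompt = np.sin(2 * np.pi * 140 * np.arange(3 * sr) / sr)  # fake 3-second voice sample

# 1. "Tokenize": map each 20 ms frame to the nearest entry of a tiny codebook.
frames = prompt[: len(prompt) // 320 * 320].reshape(-1, 320)
codebook = rng.normal(size=(64, 320))
tokens = np.argmin(((frames[:, None, :] - codebook[None]) ** 2).sum(-1), axis=1)

# 2. "Learn the rules": a bigram table over the prompt's tokens stands in for the
#    language model that captures the speaker's characteristics.
bigram = np.ones((64, 64))
for a, b in zip(tokens[:-1], tokens[1:]):
    bigram[a, b] += 1
bigram /= bigram.sum(axis=1, keepdims=True)

# 3. "Generate": sample new tokens in the same style, decode via the codebook.
out = [tokens[-1]]
for _ in range(200):
    out.append(rng.choice(64, p=bigram[out[-1]]))
audio = codebook[np.array(out)].reshape(-1)
print(audio.shape)  # a new (toy) waveform built entirely from learned tokens
```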

The model is based on a technology called EnCodec by Meta, which was released this past October. The tool uses a three-part system to compress audio to files 10 times smaller than MP3s with no loss in quality; its creators intended one of its uses to be improving the quality of voice and music on calls made over low-bandwidth connections.
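For reference, EnCodec is available as an open-source Python package from Meta (facebookresearch/encodec). The snippet below follows the usage example published with that repository as best I recall it; the audio path is a placeholder, and the API should be checked against the current README before relying on it.

```python
import torch
import torchaudio
from encodec import EncodecModel
from encodec.utils import convert_audio

# Load the pretrained 24 kHz model and pick a target bandwidth in kbps.
model = EncodecModel.encodec_model_24khz()
model.set_target_bandwidth(6.0)

# "sample.wav" is a placeholder path, not a file shipped with the library.
wav, sr = torchaudio.load("sample.wav")
wav = convert_audio(wav, sr, model.sample_rate, model.channels)
wav = wav.unsqueeze(0)  # add a batch dimension

with torch.no_grad():
    encoded_frames = model.encode(wav)                          # list of (codes, scale) pairs
    codes = torch.cat([c for c, _ in encoded_frames], dim=-1)   # [B, n_q, T] discrete tokens
    reconstructed = model.decode(encoded_frames)                # back to a waveform
print(codes.shape, reconstructed.shape)
```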

Contents:
00:00 — Lunar Glass & Lunar Vehicle (Music: Lunar City)
05:01 — HEXATRACK Space Express Concept (Music: Constellation)
10:30 — Mars Glass, Dome City, and Martian Terra Forming (Music: Martian)
13:54 — Beyond: Proxima Centauri, Tau Ceti e, the TRAPPIST-1 system, and beyond (Music: Neptune)

HEXATRACK Space Express Concept designed and created by Yosuke A. Yamashiki, Kyoto University.
Lunar Glass & Mars Glass, designed and created by Takuya Ono, Kajima Co. Ltd.
Visual effects and detailed design generated by Junya Okamura.
Concept Advisor: Naoko Yamazaki, Astronaut, SIC Human Spaceology Center, GSAIS, Kyoto University.
VR of Lunar & Mars Glass created by Natsumi Iwato and Mamiko Hikita, Kyoto University.
VR contents of Lunar & Mars Glass by Shinji Asano, Natsumi Iwato, Mamiko Hikita and Junya Okamura.
Daidaros concept by Takuya Ono.
Terraformed Mars was designed by Fuka Takagi & Yosuke A. Yamashiki.
Exoplanet images were created by Ryusuke Kuroki, Fuka Takagi, Hiroaki Sato, Ayu Shiragashi and Y. A. Yamashiki.
All music ("Lunar City", "Constellation", "Martian", "Neptune") was composed and played by Yosuke Alexandre Yamashiki.

Summary: Combining neuroimaging and EEG data, researchers recorded the neural activity of people while they listened to a piece of music. Using machine learning, the data were translated to reconstruct and identify the specific piece of music the test subjects were listening to.

Source: University of Essex.

A new technique for monitoring brain waves can identify the music someone is listening to.
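The study's actual pipeline (combined EEG and neuroimaging data fed to a machine-learning decoder) cannot be reproduced from this summary, but the piece-identification framing itself is easy to illustrate. The sketch below uses synthetic "EEG features" and an off-the-shelf scikit-learn classifier; all numbers, feature choices and names are invented for illustration and are not the Essex team's method.

```python
# Illustrative only: identify which piece was heard from brain-signal-like features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n_trials, n_channels, n_pieces = 300, 32, 5

# Fake per-channel "EEG features", weakly tied to which piece was heard.
labels = rng.integers(0, n_pieces, size=n_trials)
piece_signatures = rng.normal(size=(n_pieces, n_channels))
features = piece_signatures[labels] + rng.normal(scale=2.0, size=(n_trials, n_channels))

# Train a simple classifier to recover the piece identity from the features.
X_train, X_test, y_train, y_test = train_test_split(features, labels, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"piece-identification accuracy: {clf.score(X_test, y_test):.2f}")
```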