
SFIA Monthly Livestream: Sunday, January 29, 2023, 4pm EST

Our monthly livestream Q&A session. Join us on Sunday, January 29, 2023, at 4pm EST, and post your questions about the channel and its episodes in the chat to have them answered!

Support the stream: https://streamlabs.com/isaacarthur
Catch the audio-only show on Soundcloud: https://soundcloud.com/isaac-arthur-148927746
Visit our Website: https://www.isaacarthur.net
SFIA Discord Server: https://discord.gg/53GAShE
Support us on Patreon: https://www.patreon.com/IsaacArthur
Facebook Group: https://www.facebook.com/groups/1583992725237264/
Reddit: https://www.reddit.com/r/IsaacArthur/
Twitter: follow https://twitter.com/Isaac_A_Arthur and RT our future content
Music Courtesy of Epidemic Sound: http://epidemicsound.com/creator

28-Year-Old Scams JP Morgan for $175 Million

Do we have a new Elizabeth Holmes?

To be fair, it seems JP Morgan was only able to check the emails after acquiring the platform, since the bank was concerned about breaching data privacy before becoming Frank's legal caretaker.

Then again, it might be because “old companies” are having difficulty identifying new scams.


Uplevel your news reading experience with Ground News using our link: https://ground.news/wsm.

In 2021, JP Morgan acquired a student-aid website called withfrank.org for $175 million. Now the bank is accusing Frank of being a massive fraud. So what is going on?

Beyond Human: A Billion Years of Evolution and the Fate of Our Species

Our lifespans might feel like a long time by human standards, but to the Earth they're the blink of an eye. Even the entirety of human history represents a tiny sliver of our planet's vast chronology. We often think about geological time when looking back into the past, but today we look ahead. What might happen on our planet over the next billion years?

Written and presented by Prof David Kipping, edited by Jorge Casas.

→ Support our research program: https://www.coolworldslab.com/support
→ Get Stash here! https://teespring.com/stores/cool-worlds-store

THANK-YOU to our supporters D. Smith, M. Sloan, C. Bottaccini, D. Daughaday, A. Jones, S. Brownlee, N. Kildal, Z. Star, E. West, T. Zajonc, C. Wolfred, L. Skov, G. Benson, A. De Vaal, M. Elliott, B. Daniluk, M. Forbes, S. Vystoropskyi, S. Lee, Z. Danielson, C. Fitzgerald, C. Souter, M. Gillette, T. Jeffcoat, J. Rockett, D. Murphree, S. Hannum, T. Donkin, K. Myers, A. Schoen, K. Dabrowski, J. Black, R. Ramezankhani, J. Armstrong, K. Weber, S. Marks, L. Robinson, S. Roulier, B. Smith, G. Canterbury, J. Cassese, J. Kruger, S. Way, P. Finch, S. Applegate, L. Watson, E. Zahnle, N. Gebben, J. Bergman, E. Dessoi, J. Alexander, C. Macdonald, M. Hedlund, P. Kaup, C. Hays, W. Evans, D. Bansal, J. Curtin, J. Sturm, RAND Corp., M. Donovan, N. Corwin, M. Mangione, K. Howard, L. Deacon, G. Metts, G. Genova, R. Provost, B. Sigurjonsson, G. Fullwood, B. Walford, J. Boyd, N. De Haan, J. Gillmer, R. Williams, E. Garland, A. Leishman, A. Phan Le, R. Lovely, M. Spoto, A. Steele, M. Varenka, K. Yarbrough & F. Demopoulos.

::Music::
Music licensed via SoundStripe.com (SS) [shorturl.at/ptBHI] or Artlist.io, under the Creative Commons (CC) Attribution License (https://creativecommons.org/licenses/by/4.0/), or with permission from the artist.
► 00:00 Hill — All Flesh Is as the Grass [https://open.spotify.com/track/1WuMK4qy9tUSGMINoEClxL?si=5635838259b34fa4]
► 03:56 Hill — The Great Alchemist [https://open.spotify.com/track/3PAx36jIsKiQMT9CQsRk4G?si=035fc819505445a1]
► 07:50 Outside the Sky — Trillions
► 11:41 Hill — We Are Unceasing Beings [https://open.spotify.com/track/3TnhawPMycRrPuTnKzNGNN?si=bddf4e61177d48c4]
► 14:57 Indive — Halo Drive

My Mind was Blown. AI Music is INSANE! — Google’s NEW MusicLM AI

I wonder if musicians should be worried.


Google Research introduces MusicLM, a model that can generate high-fidelity music from text descriptions. See how MusicLM casts conditional music generation as a hierarchical sequence-to-sequence modeling task, and how it outperforms previous systems in both audio quality and adherence to the text description. Learn more about MusicCaps, a dataset of 5.5k music-text pairs, and see how MusicLM can be conditioned on both text and a melody. Check out this video to see the power of MusicLM: Generating Music From Text! #GoogleResearch #MusicLM #MusicGeneration
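For intuition, the hierarchical setup can be sketched in a few lines of Python. Everything below is a toy stand-in built on random stubs, not Google's components or API (MusicLM itself has not been released):

```python
# Toy sketch of hierarchical text-to-music generation, loosely following the
# staged design described above. All functions are hypothetical random stubs,
# NOT Google's MusicLM implementation, which is unreleased.
import numpy as np

rng = np.random.default_rng(42)

def embed_text(prompt: str) -> np.ndarray:
    """Stand-in for a joint music/text embedding model."""
    return rng.standard_normal(128)

def semantic_stage(text_embedding: np.ndarray, n_tokens: int = 50) -> np.ndarray:
    """Stage 1: coarse 'semantic' tokens capturing long-term structure
    (melody, rhythm, genre), generated sequence-to-sequence from the prompt."""
    return rng.integers(0, 1024, size=n_tokens)

def acoustic_stage(semantic_tokens: np.ndarray, per_token: int = 8) -> np.ndarray:
    """Stage 2: each coarse token is expanded into fine 'acoustic' tokens,
    i.e. the discrete codes a neural audio codec could decode into a waveform."""
    return rng.integers(0, 1024, size=semantic_tokens.size * per_token)

prompt = "a calming violin melody backed by a distorted guitar riff"
acoustic = acoustic_stage(semantic_stage(embed_text(prompt)))
print(f"{acoustic.size} acoustic tokens ready for codec decoding")
```

Splitting generation into stages like this is what lets such systems keep long-range musical structure coherent while still producing fine-grained, high-fidelity audio.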

▼ Link(s) From Today’s Video:

✩ MusicLM: https://google-research.github.io/seanet/musiclm/examples/

► MattVidPro Website: https://MattVidPro.com

Google created an AI that can generate music from text descriptions, but won’t release it

An impressive new AI system from Google can generate music in any genre given a text description. But the company, fearing the risks, has no immediate plans to release it.

Called MusicLM, Google's system certainly isn't the first generative AI for song. There have been other attempts, including Riffusion, an AI that composes music by visualizing it, as well as Dance Diffusion, Google's own AudioLM, and OpenAI's Jukebox. But owing to technical limitations and limited training data, none has been able to produce songs particularly complex in composition or of particularly high fidelity.

MusicLM is perhaps the first that can.

Microsoft’s New AI Can Clone Your Voice in Just 3 Seconds

AI is being used to generate everything from images to text to artificial proteins, and now another thing has been added to the list: speech. Last week researchers from Microsoft released a paper on a new AI called VALL-E that can accurately simulate anyone’s voice based on a sample just three seconds long. VALL-E isn’t the first speech simulator to be created, but it’s built in a different way than its predecessors—and could carry a greater risk for potential misuse.

Most existing text-to-speech models use waveforms (graphical representations of sound waves as they move through a medium over time) to create fake voices, tweaking characteristics like tone or pitch to approximate a given voice. VALL-E, though, takes a sample of someone’s voice and breaks it down into components called tokens, then uses those tokens to create new sounds based on the “rules” it already learned about this voice. If a voice is particularly deep, or a speaker pronounces their A’s in a nasal-y way, or they’re more monotone than average, these are all traits the AI would pick up on and be able to replicate.
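To make that pipeline concrete, here is a self-contained toy sketch: the three-second prompt becomes discrete codec tokens, and a language model over those tokens generates new speech in the same voice. Every function is a hypothetical random stub, not Microsoft's unreleased VALL-E code:

```python
# Illustrative sketch of a VALL-E-style flow: voice prompt -> codec tokens ->
# autoregressive token generation -> (eventually) decoded audio. All functions
# are random stubs standing in for real models; this is NOT Microsoft's code.
import numpy as np

rng = np.random.default_rng(0)

def tokenize_audio(waveform: np.ndarray) -> np.ndarray:
    """Stand-in for a neural codec encoder (waveform -> discrete tokens)."""
    return rng.integers(0, 1024, size=waveform.size // 320)  # ~75 tokens/s at 24 kHz

def phonemize(text: str) -> np.ndarray:
    """Stand-in for a grapheme-to-phoneme front end (text -> phoneme IDs)."""
    return np.array([ord(c) % 64 for c in text.lower() if c.isalpha()], dtype=np.int64)

def next_token(context: np.ndarray) -> int:
    """Stand-in for one step of the autoregressive token language model."""
    return int(rng.integers(0, 1024))

def synthesize(prompt_wav: np.ndarray, text: str, n_new: int = 150) -> np.ndarray:
    prompt_tokens = tokenize_audio(prompt_wav)   # captures the "rules" of the voice
    context = np.concatenate([phonemize(text), prompt_tokens])
    out: list[int] = []
    for _ in range(n_new):                       # predict one token at a time
        history = np.concatenate([context, np.array(out, dtype=np.int64)])
        out.append(next_token(history))
    return np.array(out)  # a codec decoder would turn these back into audio

three_second_prompt = rng.standard_normal(3 * 24_000)  # 3 s of audio at 24 kHz
print(f"generated {synthesize(three_second_prompt, 'Hello there').size} speech tokens")
```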

The model is based on a technology called EnCodec by Meta, which was released just this past October. The tool uses a three-part system to compress audio to one-tenth the size of an MP3 with no loss in quality; its creators intended one of its uses to be improving the quality of voice and music on calls made over low-bandwidth connections.
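EnCodec itself is open source, so the compression step described above can be tried directly. Here is a minimal sketch using Meta's encodec package (pip install encodec); the 24 kHz model, the 6 kbps target bitrate, and the filename sample.wav are illustrative choices, not details from the article:

```python
# Minimal EnCodec compression sketch using Meta's open-source package
# (pip install encodec). Model choice, bitrate, and filename are illustrative.
import torch
import torchaudio
from encodec import EncodecModel
from encodec.utils import convert_audio

model = EncodecModel.encodec_model_24khz()
model.set_target_bandwidth(6.0)  # target bitrate in kbps

wav, sr = torchaudio.load("sample.wav")
wav = convert_audio(wav, sr, model.sample_rate, model.channels)
wav = wav.unsqueeze(0)  # add a batch dimension

with torch.no_grad():
    frames = model.encode(wav)                         # list of (codes, scale) pairs
    codes = torch.cat([c for c, _ in frames], dim=-1)  # [batch, n_codebooks, time]
    reconstruction = model.decode(frames)              # discrete codes back to audio

print(f"compressed to {codes.shape[1]} codebooks x {codes.shape[2]} timesteps")
```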
