
Ten SpaceX Starships are carrying 120 robots to Mars. They are the first to colonize the Red Planet, building robot habitats to protect themselves, followed by landing pads, structures, and life support systems for the humans who will soon arrive.

This Mars colonization mini-documentary also covers the types of robots that will be building on Mars, the solar fields, how Elon Musk and Tesla could operate a battery bank station at the Mars colony, and how the Martian colony expands during the two years when the robots are building, a period known as the Robotic Age of Mars.

Additional footage from: SpaceX, NASA/JPL/University of Arizona, ICON, HASSELL, Tesla, Lockheed Martin.

A building-on-Mars sci-fi documentary, and a timelapse look into the future.
See more of Venture City at my website: https://vx-c.com.

_______
Books.

• The Martian showcases the science, math, and physics of living on the Red Planet, told through the story of someone who has to survive there.

A recent open letter signed by tech giants, including Elon Musk, has called for a halt in AI development, citing “profound risks to society and humanity.” But could this pause lead to a more dangerous outcome? The AI landscape resembles the classic Prisoner’s Dilemma, where cooperation yields the best results, but betrayal tempts players to seek personal gain.

If OpenAI pauses work on ChatGPT, will others follow, or will they capitalize on the opportunity to surpass OpenAI? This is particularly worrisome given the strategic importance of AI in global affairs and the potential for less transparent actors to monopolize AI advancements.

Instead of halting development, OpenAI should continue its work while advocating for responsible and ethical AI practices. By acting as a role model, implementing safety measures, and collaborating with the global AI community to establish ethical guidelines, OpenAI can help ensure that AI technology benefits humanity rather than becoming a tool for exploitation and harm.

The non-profit said powerful AI systems should only be developed “once we are confident that their effects will be positive and their risks will be manageable.” It cited potential risks to humanity and society, including the spread of misinformation and widespread automation of jobs.

The letter urged AI companies to create and implement a set of shared safety protocols for AI development, which would be overseen by independent experts.

Apple cofounder Steve Wozniak, Stability AI CEO Emad Mostaque, researchers at Alphabet’s AI lab DeepMind, and notable AI professors have also signed the letter. At the time of publication, OpenAI CEO Sam Altman had not added his signature.

Do you really want to live forever? Futurist Ray Kurzweil has predicted that humans will achieve immortality in just seven years.

Related headlines:

• Genetic engineering company touts ‘Jurassic Park’-like plan to ‘de-extinct’ dodo bird

• Elon Musk ‘comfortable’ putting Neuralink chip into one of his kids




The Climate Foundation’s SeaForestation project has won a Milestone XPRIZE for carbon removal, from Elon Musk’s foundation.

According to the prize’s official site, the competition “is aimed at tackling the biggest threat facing humanity — fighting climate change and rebalancing Earth’s carbon cycle”.

Funded by Elon Musk and the Musk Foundation, this $100 million competition claims to be the largest incentive prize in history.

GPT-4 is reportedly about six times larger than GPT-3, according to a media report, and Elon Musk’s exit from OpenAI has cleared the way for Microsoft.

The US website Semafor, citing eight anonymous sources familiar with the matter, reports that OpenAI’s new GPT-4 language model has one trillion parameters. Its predecessor, GPT-3, has 175 billion parameters.

Semafor previously revealed Microsoft’s $10 billion investment in OpenAI and the integration of GPT-4 into Bing in January and February, respectively, before the official announcement.