
Schmidt thinks that if the AI sector doesn’t create protections, politicians will have to step in.

Eric Schmidt, the former CEO of Google, has spoken out against the six-month pause on AI development that some tech celebrities and business executives recently demanded.

“I’m not in favor of a six-month pause, because it will simply benefit China,” Schmidt said.


Image credit: Wikimedia Commons.

The last few weeks have been abuzz with news and fears (well, largely fears) about the impact ChatGPT and other generative technologies might have on the workplace. Goldman Sachs estimated that as many as 300 million jobs could be exposed to automation, while the likes of Steve Wozniak and Elon Musk asked for AI development to be paused (although pointedly not the development of autonomous driving).

Indeed, OpenAI chief Sam Altman recently declared that he was “a little bit scared”, a sentiment shared by OpenAI’s chief scientist Ilya Sutskever, who said that “at some point it will be quite easy, if one wanted, to cause a great deal of harm”.


As fears mount about the jobs supposedly at risk from generative AI technologies like ChatGPT, are these fears likely to prevent people from taking steps to adapt?

A new universal humanoid robot, a boom in liquid robots designed to operate inside the human body, the latest news about GPT, the artificial intelligence that has become the newsmaker of the week, and other high-tech news, all in one video! Here we go!


The idea of a Tesla electric bus makes sense now that the company has started manufacturing and delivering its heavy-duty battery electric truck, the Semi. The Semi’s availability on the market suggests that Tesla has overcome certain constraints, including battery constraints, that previously prevented it from delivering heavy-duty vehicles.

In its Master Plan Part 3, Tesla notes that its electric bus will use 300 kWh battery packs built with LFP cells.

Tesla’s Master Plan Part 3 teases a new direction for the company with its next-generation platform. Tesla’s new adventure will likely feature some challenging obstacles, more meme-worthy moments from Elon Musk, and more fun for Tesla investors and supporters.

After almost two years of waiting for Elon Musk’s Mars rocket to fly again, things are really starting to move quickly now, it seems.

The Super Heavy first-stage booster of Starship was moved to the launch site over the weekend. The Federal Aviation Administration now lists Monday, April 10 as the target launch date for Starship in its current Operations Plan Advisory for air traffic controllers.

The advisory also lists next Tuesday and Wednesday as potential backup launch dates.

Won’t that just make enemies of AI?


One of the world’s loudest artificial intelligence critics has issued a stark call to not only put a pause on AI but to militantly put an end to it — before it ends us instead.

In an op-ed for Time magazine, machine learning researcher Eliezer Yudkowsky, who has for more than two decades been warning about the dystopian future that will come when we achieve Artificial General Intelligence (AGI), is once again ringing the alarm bells.

Yudkowsky said that while he lauds the signatories of the Future of Life Institute’s recent open letter — which include SpaceX CEO Elon Musk, Apple co-founder Steve Wozniak, and onetime presidential candidate Andrew Yang — calling for a six-month pause on AI advancement to take stock, he himself didn’t sign it because it doesn’t go far enough.

Ten SpaceX Starships are carrying 120 robots to Mars; they are the first to colonize the Red Planet, building robot habitats to protect themselves, then landing pads, structures, and the life-support systems for the humans who will soon arrive.

This Mars colonization mini-documentary also covers the types of robots that will be building on Mars, the solar fields, how Elon Musk and Tesla could set up a battery bank station at the Mars colony, and how the Martian colony expands during the two years while the robots build, a period known as the Robotic Age of Mars.

Additional footage from: SpaceX, NASA/JPL/University of Arizona, ICON, HASSELL, Tesla, Lockheed Martin.

A sci-fi documentary about building on Mars, and a timelapse look into the future.

A recent open letter signed by tech leaders, including Elon Musk, has called for a halt in AI development, citing “profound risks to society and humanity.” But could this pause lead to a more dangerous outcome? The AI landscape resembles the classic Prisoner’s Dilemma, in which mutual cooperation yields the best joint outcome, but each player is tempted to defect for personal gain.

If OpenAI pauses work on ChatGPT, will others follow, or will they capitalize on the opportunity to surpass OpenAI? This is particularly worrisome given the strategic importance of AI in global affairs and the potential for less transparent actors to monopolize AI advancements.
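To make the game-theory analogy concrete, here is a minimal Python sketch of a standard Prisoner’s Dilemma payoff matrix applied to two hypothetical AI labs. The “pause”/“develop” labels and the payoff numbers are illustrative assumptions, not figures from the open letter or from any company; the sketch only shows why continuing development is the dominant strategy for each lab regardless of what the other does, even though mutual restraint pays more jointly.

```python
# Illustrative Prisoner's Dilemma for two hypothetical AI labs.
# "pause"   = cooperate with the proposed moratorium
# "develop" = defect and keep training frontier models
# Payoffs are made-up (higher = better for that lab), chosen only to
# satisfy the standard dilemma ordering: temptation > reward > punishment > sucker.
PAYOFFS = {
    ("pause",   "pause"):   (3, 3),  # coordinated restraint: best joint outcome
    ("pause",   "develop"): (0, 5),  # the lab that pauses falls behind
    ("develop", "pause"):   (5, 0),  # the lab that keeps going pulls ahead
    ("develop", "develop"): (1, 1),  # everyone races: worst joint outcome
}

def best_response(opponent_action: str) -> str:
    """Return the action that maximizes a lab's own payoff,
    given what the other lab does."""
    return max(("pause", "develop"),
               key=lambda action: PAYOFFS[(action, opponent_action)][0])

if __name__ == "__main__":
    for opponent in ("pause", "develop"):
        print(f"If the other lab chooses {opponent!r}, "
              f"the best response is {best_response(opponent)!r}")
    # Both lines name 'develop', so (develop, develop) is the equilibrium
    # even though (pause, pause) pays more in total.
```

In other words, whichever lab pauses unilaterally risks being overtaken, which is exactly the dynamic described above.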

Instead of halting development, OpenAI should continue its work while advocating for responsible and ethical AI practices. By acting as a role model, implementing safety measures, and collaborating with the global AI community to establish ethical guidelines, OpenAI can help ensure that AI technology benefits humanity rather than becoming a tool for exploitation and harm.