
AI and biophysics unite to forecast high-risk viral variants before outbreaks

When the first reports of a new COVID-19 variant emerge, scientists worldwide scramble to answer a critical question: Will this new strain be more contagious or more severe than its predecessors? By the time answers arrive, it’s frequently too late to inform immediate public policy decisions or adjust vaccine strategies, costing public health officials valuable time, effort, and resources.

In a pair of recent publications in Proceedings of the National Academy of Sciences, a research team in Harvard's Department of Chemistry and Chemical Biology combined biophysics with artificial intelligence to identify high-risk viral variants in record time, offering a transformative approach to handling pandemics. Their goal: to get ahead of a virus by forecasting its evolutionary leaps before it threatens public health.

“As a society, we are often very unprepared for the emergence of new viruses and pandemics, so our lab has been working on ways to be more proactive,” said senior author Eugene Shakhnovich, Roy G. Gordon Professor of Chemistry. “We used fundamental principles of physics and chemistry to develop a multiscale model to predict the course of evolution of a particular variant and to predict which variants will become dominant in populations.”
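
The papers themselves are not reproduced here, but the core question, which variant will come to dominate, can be illustrated with a standard population-genetics toy model: if each variant grows at a rate set by its relative fitness, variant frequencies follow replicator (multinomial logistic) dynamics. Below is a minimal Python sketch with hypothetical fitness values, not estimates from the PNAS studies:

```python
import numpy as np

def variant_frequencies(fitness, freq0, days):
    """Replicator dynamics: variant i's share grows like exp(f_i * t),
    renormalized each day so frequencies sum to 1."""
    fitness = np.asarray(fitness, dtype=float)
    freq0 = np.asarray(freq0, dtype=float)
    t = np.arange(days)[:, None]              # shape (days, 1)
    weights = freq0 * np.exp(fitness * t)     # unnormalized growth curves
    return weights / weights.sum(axis=1, keepdims=True)

# Hypothetical example: three co-circulating variants. Fitness values
# are illustrative only, not taken from the published models.
traj = variant_frequencies(fitness=[0.00, 0.05, 0.08],
                           freq0=[0.90, 0.08, 0.02],
                           days=120)
print(traj[-1].round(3))  # the fittest variant ends up dominant
```

In this toy setting, a variant that starts at just 2% of cases overtakes the others within a few months once its growth advantage compounds; the advance described in the papers is predicting such fitness differences from biophysical first principles rather than fitting them after the fact.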

OpenAI co-founder Sutskever sets up new AI company devoted to ‘safe superintelligence’

(AP) — Ilya Sutskever, one of the founders of OpenAI who was involved in a failed effort to push out CEO Sam Altman, said he’s starting a safety-focused artificial intelligence company.

Sutskever, a respected AI researcher who left the ChatGPT maker last month, said in a social media post Wednesday that he’s created Safe Superintelligence Inc. with two co-founders. The company’s only goal and focus is safely developing “superintelligence” — a reference to AI systems that are smarter than humans.

The company vowed not to be distracted by “management overhead or product cycles,” and under its business model, work on safety and security would be “insulated from short-term commercial pressures,” Sutskever and his co-founders Daniel Gross and Daniel Levy said in a prepared statement.

AI helps discover optimal new material for removing radioactive iodine contamination

Managing radioactive waste is one of the core challenges in the use of nuclear energy. In particular, radioactive iodine poses serious environmental and health risks due to its long half-life (15.7 million years in the case of I-129), high mobility, and toxicity to living organisms.

A Korean research team has successfully used artificial intelligence to discover a new material that can remove iodine for nuclear environmental remediation. The team plans to push forward with commercialization through various industry–academia collaborations, from iodine-adsorbing powders to contaminated water treatment filters.

Professor Ho Jin Ryu’s research team from the Department of Nuclear and Quantum Engineering, in collaboration with Dr. Juhwan Noh of the Digital Chemistry Research Center at the Korea Research Institute of Chemical Technology, developed a technique using AI to discover new materials that effectively remove contaminants. Their research is published in the Journal of Hazardous Materials.
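
The team's exact workflow is not detailed in this digest, so the sketch below shows only the generic shape of ML-guided materials screening: train a surrogate model on materials whose iodine adsorption energies have already been computed, then rank a large unlabeled candidate pool for follow-up validation. All features, data, and model choices here are illustrative assumptions, not the team's method:

```python
# Generic ML-guided screening sketch (hypothetical data and features).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Stand-in descriptors (e.g., composition/structure features) for
# materials whose adsorption energy is already known, e.g. from DFT.
X_known = rng.normal(size=(200, 8))
y_known = rng.normal(loc=-1.0, scale=0.5, size=200)  # eV, hypothetical

surrogate = RandomForestRegressor(n_estimators=300, random_state=0)
surrogate.fit(X_known, y_known)

# Score a much larger unlabeled candidate pool and keep the strongest
# (most negative) predicted iodine binders for follow-up work.
X_pool = rng.normal(size=(10_000, 8))
pred = surrogate.predict(X_pool)
top = np.argsort(pred)[:10]
print("Candidate indices to validate next:", top)
```

The payoff of this pattern is that the expensive step, detailed simulation or actual synthesis, is spent only on the handful of candidates the surrogate ranks highest.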

Senate Votes to Allow State A.I. Laws, a Blow to Tech Companies

There are no federal laws regulating A.I., but states have enacted dozens of laws that strengthen consumer privacy, ban A.I.-generated child sexual abuse material and outlaw deepfake videos of political candidates. All but a handful of states have some form of A.I. regulation in place, and the area draws deep interest: all 50 states have introduced A.I.-related bills in the past year.

The Senate provision, introduced by Senator Ted Cruz, Republican of Texas, sparked intense criticism from state attorneys general, child safety groups and consumer advocates, who warned the amendment would give A.I. companies a clear runway to develop unproven and potentially dangerous technologies.

Could Google’s Veo 3 be the start of playable world models?

Demis Hassabis, CEO of Google’s AI research organization DeepMind, appeared to suggest Tuesday evening that Veo 3, Google’s latest video-generating model, could potentially be used for video games.

In response to a post on X beseeching Google to “Let me play a video game of my veo 3 videos already” and asking “playable world models wen?”, Hassabis responded, “now wouldn’t that be something.”

On Wednesday morning, Logan Kilpatrick, product lead for Google’s AI Studio and the Gemini API, chimed in with a reply: “🤐🤐🤐🤐”

Silicon Valley investor Vinod Khosla predicts AI will replace 80% of jobs by 2030—and take much of the Fortune 500 with it

Tech entrepreneur and investor Vinod Khosla predicts that AI will automate 80% of high-value jobs by 2030, a forecast that coincides with a broader reckoning for Fortune 500 companies.

A ‘Sputnik’ moment in the global AI race

When Chinese AI startup DeepSeek unveiled the open-source large language model DeepSeek-R1 in January, many referred to it as the “AI Sputnik shock” — a reference to the monumental significance of the Soviet Union’s 1957 launch of the first satellite into orbit.

Much remains uncertain about DeepSeek’s LLM, and its capabilities should not be overestimated, but its release has nevertheless sparked intense discussion, particularly about cost. DeepSeek claims that its model possesses reasoning abilities on par with or even superior to OpenAI’s leading models, with training costs of reportedly just $5.6 million, less than one-tenth of OpenAI’s, largely due to the use of NVIDIA’s lower-cost H800 GPUs rather than the more powerful H100 or H200 models.

Tech giants like Meta and Google have spent billions of dollars on high-performance GPUs to develop cutting-edge AI models. However, DeepSeek’s ability to produce a high-performance AI model at a significantly lower cost challenges the prevailing belief that computational power—determined by the number and quality of GPUs—is the primary driver of AI performance.