
Artificial intelligence in various forms has been used in medicine for decades — but not like this. Experts predict that the adoption of large language models will reshape medicine. Some compare the potential impact with the decoding of the human genome, even the rise of the internet. The impact is expected to show up in doctor-patient interactions, physicians’ paperwork load, hospital and physician practice administration, medical research, and medical education.

Most of these effects are likely to be positive: increasing efficiency, reducing mistakes, easing the nationwide crunch in primary care, bringing data to bear more fully on decision-making, reducing administrative burdens, and creating space for longer, deeper person-to-person interactions.

This research note draws on data from a simulation experiment to illustrate the very real effects of monolithic views of technology potential on decision-making within the Homeland Security and Emergency Management (HSEM) field. Specifically, a population of national security decision-makers from across the United States participated in an experimental study that examined their responses to encountering different kinds of AI agency in a crisis situation. The results illustrate wariness of overstep and unwillingness to be assertive when AI tools are observed shaping key situational developments, something not apparent when AI is either absent or used as a limited aide to human analysis. These effects are mediated by levels of respondent training. Of great concern, however, these restraining effects disappear, and the impact of education in driving professionals towards prudent outcomes is minimized, for those individuals who profess to see AI as a fully viable replacement for their professional practice. These findings constitute proof of a “Great Machine” problem within professional HSEM practice. Willingness to accept grand, singular assumptions about emerging technologies into operational decision-making clearly encourages ignorance of technological nuance. The result is a serious challenge for HSEM practice that requires more sophisticated solutions than simply raising awareness of AI.

Keywords: artificial intelligence; cybersecurity; experiments; decision-making.

This document intends to provide a summary of the cybersecurity threats in Japan with reference to the globally observed cyber landscape. It examines the various kinds of cyberattacks, their volume and impact, as well as the specific verticals targeted by various threat actors.

As of February 2024, an organisation in Japan faces an average of 1,003 attacks per week, with FakeUpdates being the top malware. Most malicious files are delivered via email, and Remote Code Execution is the most common vulnerability exploit. Recent major Japanese incidents include a sophisticated nation-state malware campaign, attacks on Nissan and JAXA, and data breaches at the University of Tokyo and CASIO. Globally, incidents include Ukrainian media hacks, a ransomware attack on U.S. schools, and disruptions in U.S. healthcare due to cyberattacks. The document also covers trends in malware types, attack vectors, and impacted industries over the last six months.

These details provide an overview of the threat landscape and major incidents in Japan and globally, highlighting the prevalence of attacks, common malware types, and the impact on various industries and organisations. This information should create awareness and help businesses and government organisations prepare to operate safely in a digital environment.

As an initial step, we selected ARDs associated with hallmarks of aging. These included a total of 83 diseases linked to one or more hallmarks of aging, based on the taxonomy put forward in ref. 4 (Supplementary Table 2). Support for this taxonomy comes from multiple sources. Analyses of electronic health records from general practice and hospitalizations identified more than 200 diseases with incidence rates increasing with chronological age6,22. Researchers linked a subset of these ARDs to specific hallmarks of aging using several approaches: mining 1.85 million PubMed abstracts on human aging, identifying shared genes in the genome-wide association study catalog, conducting gene set enrichment analysis and analyzing disease co-occurrence networks within each hallmark4.

We confirmed the co-occurrence of ARDs within each hallmark in 492,257 participants from the UK Biobank study23. The presence of one ARD increased the risk of developing another ARD related to the same hallmark, with clustering coefficients ranging from 0.76 for LOP-specific ARDs to 0.92 for SCE-specific ARDs. These findings corroborated the hallmark-specific clustering of ARDs (Extended Data Figs. 3 and 4)23.
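The clustering coefficients reported above quantify how tightly ARDs sharing a hallmark co-occur in a network sense. As a minimal sketch of what such a metric computes, here is the standard local clustering coefficient in pure Python; the disease labels and edges are hypothetical illustrations, not data from the study:

```python
from itertools import combinations

def clustering_coefficient(adj, node):
    """Local clustering coefficient: the fraction of a node's
    neighbor pairs that are themselves connected."""
    neighbors = adj[node]
    k = len(neighbors)
    if k < 2:
        return 0.0
    links = sum(1 for u, v in combinations(neighbors, 2) if v in adj[u])
    return 2 * links / (k * (k - 1))

# Hypothetical co-occurrence network of four ARDs sharing one hallmark
adj = {
    "ARD_A": {"ARD_B", "ARD_C", "ARD_D"},
    "ARD_B": {"ARD_A", "ARD_C"},
    "ARD_C": {"ARD_A", "ARD_B", "ARD_D"},
    "ARD_D": {"ARD_A", "ARD_C"},
}
avg = sum(clustering_coefficient(adj, n) for n in adj) / len(adj)
```

A value near 1 means a patient's comorbid ARDs within a hallmark tend to form a densely interlinked cluster, as observed for SCE-specific ARDs (0.92).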

In time-to-event analyses of UK Biobank and FPS participants without these ARDs at baseline (n ranging from 477,325 to 492,294 in the UK Biobank and from 278,272 to 286,471 in the FPS, depending on the social disadvantage indicator and ARD), social disadvantage, indicated by education and adult SES (neighborhood deprivation), was associated with a higher risk of developing ARDs. In the UK Biobank, the age-, sex- and ethnicity-adjusted hazard ratio for developing any ARD was 1.31 (95% confidence interval (CI) 1.29–1.33) for individuals with low compared with high education. For individuals with low versus high adult SES, the hazard ratio was 1.21 (95% CI 1.20–1.23). In the FPS, the corresponding hazard ratios were 1.28 (95% CI 1.25–1.31) and 1.23 (95% CI 1.20–1.27), respectively.
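A hazard ratio and its 95% CI are related through the log-hazard coefficient and its standard error (CI bounds are exp(beta ± 1.96·SE)). As a sketch of that arithmetic, the following back-calculates the implied standard error from the published UK Biobank education estimate; the coefficient and SE here are derived from the reported interval, not taken from the paper:

```python
import math

# Reported: any ARD, low vs. high education, UK Biobank
hr, ci_low, ci_high = 1.31, 1.29, 1.33

beta = math.log(hr)  # log-hazard coefficient
# Implied standard error, from the width of the log-scale interval
se = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)

# Reconstruct the 95% CI from beta and se
lo = math.exp(beta - 1.96 * se)
hi = math.exp(beta + 1.96 * se)
```

Reconstructing the interval this way recovers 1.29–1.33 to two decimal places, confirming the internal consistency of the reported estimate.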

This simple animation shows the principle of Atomic Layer Deposition (ALD) using the molecules trimethylaluminum (TMA) and water (H2O). By the end of the animation, one monolayer (~1 Angstrom = 10^−10 m) of Al2O3 has been grown.
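Because ALD deposits a fixed increment per TMA/H2O cycle, the cycle count for a target film thickness is simple division. A minimal sketch, assuming the ~1 Å growth per cycle from the animation and an arbitrary 10 nm target (real growth-per-cycle values vary with process conditions):

```python
import math

GROWTH_PER_CYCLE_NM = 0.1  # ~1 Angstrom of Al2O3 per TMA/H2O cycle

def cycles_for_thickness(target_nm, gpc_nm=GROWTH_PER_CYCLE_NM):
    """Number of complete ALD cycles needed to reach a target thickness."""
    return math.ceil(target_nm / gpc_nm)

cycles = cycles_for_thickness(10.0)  # e.g. a hypothetical 10 nm film
```

This cycle-counted, self-limiting growth is exactly what gives ALD its sub-nanometer thickness control.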

You may use this video for teaching/instructional purposes. We request that you acknowledge the Banerjee Group at Washington University in St. Louis (http://research.engineering.wustl.edu/~parag.banerjee/) when using this video.

In this insightful conversation with OpenAI’s CPO Kevin Weil, we discuss the rapid acceleration of AI and its implications. Kevin makes the shocking prediction that coding will be fully automated THIS YEAR, not by 2027, as others suggest. He explains how OpenAI’s models are already ranking among the world’s top programmers and shares his thoughts on Deep Research, GPT-4.5’s human-like qualities, the future of jobs, and the timeline for GPT-5. Don’t miss Kevin’s billion-dollar startup idea and his vision for how AI will transform education and democratize software creation.

00:00 — Summary.
01:21 — Introduction.
03:20 — Discussion on OpenAI being both a research and product company.
11:05 — Timeline for GPT-5.
11:38 — AI model commoditization and maintaining competitive advantage.
15:09 — Deep Research capabilities.
24:22 — Coding automation prediction: THIS YEAR.
30:05 — AI in creative work and design.
36:43 — Future of programming and engineers.
38:32 — Will AI create new job categories?
40:58 — Billion-dollar AI startup ideas.
46:27 — Voice interfaces and robotics.
49:28 — Closing thoughts.

In this episode, Peter answers the hardest questions about AI, Longevity, and our future at an event in El Salvador (Padres y Hijos).

Recorded in February 2025
Views are my own thoughts; not Financial, Medical, or Legal Advice.

Chapters.

00:00 — Navigating Confusion in Leadership and Purpose.
02:00 — The Evolution of Work and Purpose.
03:50 — AI’s Role in Information Credibility.
07:17 — Sustainability and Technology’s Impact on Nature.
09:26 — Building a Future with AI and Longevity.
11:40 — The Economics of Longevity and Accessibility.
15:15 — Reimagining Education for the Future.
19:23 — Overcoming Human Obstacles to Progress.

I send weekly emails with the latest insights and trends on today’s and tomorrow’s exponential technologies. Stay ahead of the curve, and sign up now: https://www.diamandis.com/subscribe.

Connect with Peter: