
TSMC 2nm Update: Two Fabs in Construction, One Awaiting Government Approval

When Taiwan Semiconductor Manufacturing Co. (TSMC) prepares to roll out an all-new process technology, it usually builds a new fab to meet demand from its alpha customers and then adds capacity either by upgrading existing fabs or by building another facility. With N2 (2nm-class), the company seems to be taking a slightly different approach: it is already constructing two N2-capable fabs and is awaiting government approval for a third.

“We are also preparing our N2 volume production starting in 2025,” said Mark Liu, TSMC’s outgoing chairman, at the company’s earnings call with financial analysts and investors. “We plan to build multiple fabs or multiple phases of 2nm technologies in both Hsinchu and Kaohsiung science parks to support the strong structural demand from our customers. […] In the Taichung Science Park, the government approval process is ongoing and is also on track.”

TSMC is gearing up to construct two fabrication plants capable of producing N2 chips in Taiwan. The first fab is planned to be located near Baoshan in Hsinchu County, neighboring its R1 research and development center, which was built specifically to develop N2 technology and its successor. This facility is expected to commence high-volume manufacturing (HVM) of 2nm chips in the latter half of 2025. The second N2-capable fabrication plant is to be located in the Kaohsiung Science Park, part of the Southern Taiwan Science Park near Kaohsiung. HVM at this plant is projected to start slightly later, likely around 2026.

OpenAI’s policy update signals for the future of AI and military

From blanket bans to specific prohibitions

Previously, OpenAI had a strict ban on using its technology for any “activity that has high risk of physical harm,” explicitly including “weapons development” and “military and warfare.” This would have prevented any government or military agency from using OpenAI’s services for defense or security purposes. However, the new policy removes the general ban on “military and warfare” use. Instead, it lists specific prohibited use cases, such as “develop or use weapons” or “harm yourself or others.”

Biomedical Research and Longevity Society, Inc.

The Biomedical Research and Longevity Society, Inc. (BRLS), formerly known as the Life Extension Foundation, Inc., is one of the world’s leading providers of financial support for otherwise unfunded research in the areas of cryobiology, interventive gerontology, and cryonics. During the last decade alone, BRLS awarded more than $100 million in grants to highly specialized cryogenic research organizations.

BRLS is exempt from taxation under Internal Revenue Code Section 501(c)(4) and is operated exclusively to promote social welfare through scientific research and education. BRLS was founded in 1977, and since then, we have awarded hundreds of grants to scientists throughout the United States who are personally committed to our mission. These dedicated professionals take extraordinary steps to make their research as cost-effective as possible. We are careful to commit our research dollars to projects that are difficult or impossible to fund through government and institutional grants or other sources.

New report identifies types of cyberattacks that manipulate behavior of AI systems

Adversaries can deliberately confuse or even “poison” artificial intelligence (AI) systems to make them malfunction—and there’s no foolproof defense that their developers can employ. Computer scientists from the National Institute of Standards and Technology (NIST) and their collaborators identify these and other vulnerabilities of AI and machine learning (ML) in a new publication.

Their work, titled Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations, is part of NIST’s broader effort to support the development of trustworthy AI, and it can help put NIST’s AI Risk Management Framework into practice. The publication, a collaboration among government, academia, and industry, is intended to help AI developers and users get a handle on the types of attacks they might expect along with approaches to mitigate them—with the understanding that there is no silver bullet.

“We are providing an overview of attack techniques and methodologies that consider all types of AI systems,” said NIST computer scientist Apostol Vassilev, one of the publication’s authors. “We also describe current mitigation strategies reported in the literature, but these available defenses currently lack robust assurances that they fully mitigate the risks. We are encouraging the community to come up with better defenses.”
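
To make “poisoning” concrete, here is a minimal sketch of one of the simplest attack classes the taxonomy covers: flipping training labels so the model learns the wrong decision boundary. The toy dataset, model choice, and flip rates below are illustrative assumptions, not an example from the NIST publication.

```python
# Minimal illustration of a label-flipping "poisoning" attack on a toy
# classifier. Dataset, flip rates, and model choice are illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Clean training and evaluation data
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def flip_labels(y, fraction, rng):
    """Return a copy of y with `fraction` of its binary labels flipped."""
    y_poisoned = y.copy()
    idx = rng.choice(len(y), size=int(fraction * len(y)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]
    return y_poisoned

# Accuracy degrades as the adversary corrupts more of the training set
for fraction in (0.0, 0.1, 0.3):
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, flip_labels(y_train, fraction, rng))
    acc = model.score(X_test, y_test)
    print(f"flip rate {fraction:.0%}: test accuracy {acc:.3f}")
```

The defender's problem, as the report notes, is that the poisoned labels are indistinguishable from honest noise once training is done, which is why no single mitigation closes the gap.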

AI discovers that not every fingerprint is unique

It’s a well-accepted fact in the forensics community that fingerprints of different fingers of the same person—intra-person fingerprints—are unique and, therefore, unmatchable.

A team led by Columbia Engineering undergraduate senior Gabe Guo challenged this widely held presumption. Guo, who had no prior knowledge of forensics, found a public U.S. government database of some 60,000 fingerprints and fed them in pairs into an artificial intelligence-based system known as a deep contrastive network. Sometimes the pairs belonged to the same person (but different fingers), and sometimes they belonged to different people.
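
For readers unfamiliar with the term, a deep contrastive network learns an embedding in which matching pairs land close together and non-matching pairs land far apart. Below is a minimal sketch of that setup in PyTorch; the backbone, embedding size, and margin loss are illustrative assumptions, not the architecture from the Columbia study.

```python
# Sketch of a siamese/contrastive setup for fingerprint pairs.
# Backbone, embedding size, and margin are illustrative assumptions,
# not the architecture used in the Columbia study.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Small CNN mapping a grayscale fingerprint image to a unit embedding."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, embed_dim)

    def forward(self, x):
        return F.normalize(self.head(self.features(x).flatten(1)), dim=1)

def contrastive_loss(z1, z2, same_person, margin=1.0):
    """Pull same-person pairs together, push different-person pairs apart."""
    dist = F.pairwise_distance(z1, z2)
    return torch.mean(
        same_person * dist.pow(2)
        + (1 - same_person) * F.relu(margin - dist).pow(2)
    )

# Toy usage: a batch of 8 fingerprint pairs (1 = same person, 0 = different)
encoder = Encoder()
a, b = torch.randn(8, 1, 96, 96), torch.randn(8, 1, 96, 96)
labels = torch.randint(0, 2, (8,)).float()
loss = contrastive_loss(encoder(a), encoder(b), labels)
loss.backward()
```

Trained this way, the network's learned distance ends up measuring whether two prints share an origin, which is exactly the similarity signal the study found across different fingers of the same person.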

Amazon, Microsoft and Google are opening Saudi Arabia HQs

There was a flurry of activity towards the end of the year as large corporations looked to establish local HQs. Other firms that have recently received such licenses are Airbus SE, Oracle Corp. and Pfizer Inc.

Saudi Arabia announced the new rules for state contracts in February 2021, saying it wanted to limit ‘economic leakage’ — a term used by the government for state spending that can benefit firms that don’t have a substantial presence in the country.

A key part of Crown Prince Mohammed bin Salman’s economic agenda has been to limit some of the billions in spending by the government and Saudi citizens that leave the country each year. Government officials want to stop awarding contracts to international firms that only fly executives in and out of the kingdom.

Mysterious crypto ‘dark money’ group ramps up lobbying efforts ahead of 2024 election

A mysterious new nonprofit group backed by the crypto industry has set up a mailing address about 100 miles from Washington, D.C., and is making moves to exert power in the nation’s capital.

The Cedar Innovation Foundation, a 501(c)(4) that was incorporated in Delaware in April, has launched advertisements against at least one powerful lawmaker who’s up for reelection, and quietly hired a group of strategists to fight on its behalf, according to records uncovered by CNBC.

It’s part of a broader effort by the crypto industry to influence Congress ahead of the 2024 elections and as a variety of crypto-related bills begin to weave their way through Washington.

Nanostructured flat lens uses machine learning to ‘see’ more clearly, while using less power

Applications range from surveillance to defense to AI/ML virtualization, and it’s more compact and energy efficient. Oh, and let’s not forget the medical imaging applications. I just wonder how long until it’s put into practice.


A front-end lens, or meta-imager, created at Vanderbilt University can potentially replace traditional imaging optics in machine-vision applications, producing images at higher speed and using less power.

The nanostructuring of lens material into a meta-imager filter reduces the typically thick optical lens and enables front-end processing that encodes information more efficiently. The imagers are designed to work in concert with a digital backend to offload computationally expensive operations into high-speed, low-power optics. The resulting images have potentially wide applications in government and defense industries, among other fields.
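
A rough way to picture the division of labor: the meta-imager performs a fixed convolution optically, before the light ever reaches the sensor, so the digital backend only runs cheap operations on pre-filtered data. The numerical sketch below stands in for that idea; the kernel and thresholding backend are hypothetical, not the filters from the paper.

```python
# Numerical stand-in for the meta-imager idea: a fixed convolutional
# front end happens "in the optics" at essentially zero electrical cost,
# and only a small digital backend runs on the result.
# Kernel and backend here are illustrative, not the paper's design.
import numpy as np
from scipy.signal import convolve2d

scene = np.random.rand(128, 128)  # incoming light field (toy stand-in)

# Edge-detecting kernel standing in for the nanostructured filter;
# in hardware this convolution occurs optically, before the photodetector.
optical_kernel = np.array([[-1, -1, -1],
                           [-1,  8, -1],
                           [-1, -1, -1]], dtype=float)

sensor_reading = convolve2d(scene, optical_kernel, mode="same")

# Lightweight digital backend: threshold the pre-filtered image.
# The expensive convolution never ran on the digital side.
features = (sensor_reading > sensor_reading.mean()).astype(np.uint8)
print(features.shape, features.mean())
```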

Mechanical engineering professor Jason Valentine, deputy director of the Vanderbilt Institute of Nanoscale Science and Engineering, and colleagues’ proof-of-concept meta-imager is described in a paper published in Nature Nanotechnology.

Why Nationalizing AI Is a Bad Idea

Here’s my new Opinion article for Newsweek on AI!


Like so many in America, I watched, astounded, as generative artificial intelligence (AI) evolved at lightning speed in 2023, performing tasks that seemed unimaginable just a few years ago. Just last month, a survey found that nearly 40 percent of more than 900 companies were planning to cut jobs in 2024 in part because of AI. If robotics takes a giant leap in the next 12 months, as some suspect, then the survey might end up being too conservative. Generative AI combined with humanoids, which many companies are racing to turn out, is a game changer. Construction jobs, physician jobs, police jobs, and many more will soon be at stake.

Clearly, capitalism is facing a crisis. For years, I have advocated for a Universal Basic Income (UBI) as a way to transition society into the AI age. My proposed method was to lease out the trillions of dollars’ worth of empty U.S. federal land to big business and use some of the proceeds to pay for a basic income for every American. However, any method of funding a basic income would now help offset the job losses AI will bring.
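
For a sense of scale, a back-of-the-envelope version of that funding arithmetic might look like the sketch below; every figure in it is a hypothetical assumption for illustration, not a number from this article.

```python
# Back-of-the-envelope arithmetic for the land-lease UBI idea above.
# Every number here is a hypothetical assumption for illustration.
federal_land_value = 20e12   # assumed value of leasable federal land, USD
annual_lease_yield = 0.03    # assumed lease yield per year
share_to_ubi = 0.5           # assumed share of proceeds routed to UBI
us_adults = 260e6            # approximate US adult population

annual_payment = (federal_land_value * annual_lease_yield
                  * share_to_ubi / us_adults)
print(f"~${annual_payment:,.0f} per adult per year under these assumptions")
```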

But recently, chatter about something else is being thrown around in internet chat rooms, in congressional halls, and in arguments at holiday dinner tables: nationalizing AI.

It’s a bad idea. For starters, I don’t want big government in the innovation business; it already has a hard enough time trying to keep people out of poverty. Right now, 1 in 5 kids in the U.S. goes to bed hungry or malnourished, and America’s homeless problem is the worst it’s been in my 50-year lifetime.