
Who Benefits from American AI Research in China? The ResNet Case Study

The most influential deep learning architecture of the past decade was developed by Chinese-born researchers at a US institution. What does ResNet tell us about the real dynamics of US-China AI competition?

January 15, 2025
12 min read

In 2015, a team of four researchers at Microsoft Research Asia published a paper titled "Deep Residual Learning for Image Recognition." That paper introduced ResNet — and fundamentally changed the trajectory of artificial intelligence research.

ResNet's innovation was deceptively simple: skip connections that allowed neural networks to grow dramatically deeper without degrading performance. Before ResNet, networks with more than 20 layers often performed worse than shallower alternatives. ResNet enabled architectures with 152 layers — and eventually thousands. The breakthrough unlocked the era of truly deep learning.
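The idea can be sketched in a few lines of plain Python. This is a toy illustration, not the original implementation: the function names are invented for this example, and real residual blocks use learned convolutional transforms.

```python
def residual_block(x, transform):
    """Apply a learned transform F(x) and add the input back: y = F(x) + x.

    The '+ x' is the skip connection: the input bypasses the transform
    and is added to its output, so gradients and signal can flow through
    even when the transform contributes little.
    """
    return [xi + fi for xi, fi in zip(x, transform(x))]

# A transform that has "learned nothing" and outputs zeros.
def zero_transform(x):
    return [0.0 for _ in x]

x = [1.0, 2.0, 3.0]
y = residual_block(x, zero_transform)
# Even with a useless transform, the block reduces to the identity:
# y == [1.0, 2.0, 3.0]
```

Because the skip connection makes the identity function trivially representable (a block only has to drive F(x) toward zero to do no harm), adding more blocks cannot easily degrade the network, which is the property that let ResNet scale to 152 layers and beyond.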

The Researchers Behind the Breakthrough

The four authors — Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun — were all Chinese-born researchers working at Microsoft Research Asia in Beijing, and all four received their undergraduate education in China. Their subsequent careers took them to institutions in both the United States and China.

Kaiming He is now at MIT's CSAIL. Shaoqing Ren is at a US tech company. Jian Sun remained in China, serving as chief scientist at Megvii, one of China's most prominent AI companies. The trajectories of these four researchers encapsulate the complexity of US-China AI talent dynamics.

ResNet's Impact by the Numbers

  • 180,000+ citations (as of 2024)
  • Winner of the 2015 ImageNet challenge (ILSVRC)
  • 152 layers in the original ResNet
  • 4 Chinese-born authors

The Policy Paradox

ResNet presents a challenge for policymakers who frame US-China AI competition as a zero-sum game. The research was conducted in China, at a Chinese lab, by Chinese-born researchers — yet it was funded by an American company and published openly, advancing the global state of the art.

Had visa restrictions prevented these researchers from later working at US institutions, would America be better off? The counterfactual is impossible to prove, but the question illuminates the complexity of talent-based technology competition.

"The single most important paper in deep learning of the past decade was written by four Chinese researchers at a lab in Beijing. That same team now spans institutions in both countries. The talent flow isn't zero-sum — it's a river that flows in multiple directions."

What ResNet Tells Us About AI Talent

The ResNet case study reveals several patterns that recur throughout our AI talent research:

  • Foundational training matters: All four authors received their undergraduate education in China, at top Chinese universities. The pipeline begins long before researchers choose where to work.
  • Location is fluid: Researchers move between countries throughout their careers. Policies focused on a single moment of restriction miss this dynamism.
  • Open research benefits everyone: ResNet's publication advanced AI capabilities globally, including at American companies and universities.
  • Commercial applications diverge: While the research was open, its commercial applications have developed differently in each country — from autonomous vehicles to facial recognition.

Implications for AI Content Generation

ResNet's architecture became foundational not just for image recognition but for image generation. The same skip connections that enabled deeper classification networks now power the generators in diffusion models and GANs. Understanding who develops these foundational technologies matters for AI governance.

Where the researchers who build the foundations of generative AI choose to work shapes how these technologies develop, what safety measures are implemented, and which governance frameworks they encounter. ResNet is a reminder that AI's global talent dynamics have consequences that extend far beyond academic citation counts.

Policy Recommendations

The ResNet case study suggests several principles for talent-focused AI policy:

  1. Think in flows, not stocks: Talent moves throughout a career. Policies should aim to make the US an attractive destination at every career stage, not just at the moment of initial entry.
  2. Distinguish research from application: Open research publication benefits global knowledge; commercial application and deployment are where competition actually occurs.
  3. Invest in the full pipeline: If foundational training happens elsewhere, the US will always be recruiting mid-career. Strengthening domestic STEM education extends the pipeline.
  4. Recognize interconnection: The most impactful AI research often involves international collaboration. Policies that sever connections may reduce total output more than they protect competitive advantage.