Friday, May 31, 2024

"Where Does China Stand in the AI Wave?"

From ChinaTalk, May 10:

China’s top policy experts discuss the US-China gap, open vs. closed, and societal implications 

This piece was authored by “Bit Wise,” an anonymous ChinaTalk contributor focusing on China’s AI policy.

Debates among Western public intellectuals on AI governance are closely followed in China. Whenever prominent figures like Sam Altman, Yoshua Bengio, or Stuart Russell give interviews, multiple Chinese media outlets swiftly translate and analyze their remarks.

English-speaking audiences, however, seldom engage with the AI governance perspectives offered by Chinese public intellectuals.

In this article, ChinaTalk presents the highlights and a full translation of a panel discussion on AI (archived here) that took place six weeks ago in Beijing. Hosted by the non-profit organization “The Intellectual” 知识分子 — whose public WeChat account serves as a platform for discussions on scientific issues and their governance implications — the panel delved into a wide range of topics, including:

  • the state of China’s AI industry, discussing the biggest bottlenecks, potential advantages in AI applications, and the role of the government in supporting domestic AI development;

  • the technical aspects of AI, such as whether Sora understands physics, the reliance on the Transformer architecture, and how far we are from true AGI;

  • and the societal implications — which jobs AI will replace first, whether open or closed source is better for AI safety, and whether AI developers should dedicate more resources to safety.

The panelists are all real heavyweights
They all attended the Second International Dialogue on AI Safety in Beijing (also in March 2024), where they engaged with prominent Western AI scientists and drafted a Consensus Statement on Red Lines in Artificial Intelligence.

Xue Lan 薛澜 is China’s ultimate tech-policy guru. With a background in engineering and public policy, he frequently advises the government and serves as a Counselor of the State Council. Xue is the director of Tsinghua University’s Institute for AI International Governance, Dean of Schwarzman College, and one of seven members on the National New Generation AI Governance Expert Committee.

Zhang Hongjiang 张宏江 is the founding chairman of the Beijing Academy of AI (BAAI), one of China’s most advanced AI research institutes. Trained in multimedia computing, he joined HP Labs in Silicon Valley in 1995. Four years later, he returned to China and participated in the establishment of Microsoft’s China business, where he later became Chief Technology Officer (CTO). In the early 2010s, he founded his own company. In 2017, he joined Source Code Capital, a Beijing-based VC firm founded by a former vice president of Sequoia Capital China; the firm has made notable investments in ByteDance, Meituan 美团, and Li Auto 理想, among others.

Li Hang 李航 is head of research at ByteDance, where he leads both basic research and product development in fields such as search, recommendation, and chat. He has decades of experience in China’s AI industry, having previously worked at Microsoft Research Asia and Huawei’s Noah’s Ark Lab. He studied in Japan in the 1990s and worked at NEC Corporation.

Highlights

Open source
Whether frontier models should be open-sourced has become a major point of disagreement in AI safety debates globally. These debates will have direct legislative influence. For example, a draft expert proposal for China’s AI law published in March 2024 would exempt some open-source models from many legal requirements. (Note that this is not an “official” draft law yet — it’s just an informal expert proposal.)

Xue Lan: Open source should be encouraged. Large companies can find profit models at the commercial application and product layer, and compete on that layer. Judging from practice across various research fields, this model — open source in the basic research stage, closed source in the productization stage — seems to be a very effective approach to promoting human progress. …

The choice between open and closed source is really a matter of weighing pros and cons. Some people worry that open source will give extremist groups or individuals the opportunity to misuse the technology. But open or closed source, anyone intent on doing evil will always find a way. Many technologies we have today can cause damage to human society if abused; biotechnology, which carries its own risk of misuse, is a good example. What matters more, therefore, is how to establish a system to prevent and stop any individual or organization from abusing technology to harm society.

Li Hang: I strongly agree with a view expressed by Harry Shum at an AI conference last year: whether or not to open source depends on a company’s business strategy and position. … The best company will definitely not open source. The second-best will not open source if it wants to compete with the first-place company. The third and fourth might choose to open source to gain some competitive advantage. … Among AI companies, OpenAI and Anthropic are currently not open source. Meta and Amazon are open source.

Zhang Hongjiang: Security and safety issues are inevitable with both open and closed source. But with open-source models, it is easier for us to verify and review them. In the future, any AI model should pass safety certification before release.

AI safety funding

Zhang Hongjiang:
At a closed-door AI meeting, I once heard a point of view that really surprised me, though I believe the data is correct: 95% of R&D expenses for nuclear power plant equipment go into safety. This is a revelation for the AI field. Should we also invest more resources in AI safety? If 95% of nuclear-power R&D is invested in safety, shouldn’t AI also invest at least 10% or 15%, given that this technology may also lead to human extinction?
The future of AI: large vs. small models....

....MUCH MORE