Last week, I went on the CBC News podcast “Nothing Is Foreign” to talk about the draft regulation—and what it means for the Chinese government to take such quick action on a still-very-new technology. 

As I said in the podcast, I see the draft regulation as a mixture of sensible restrictions on AI risks and a continuation of the Chinese government's strong tradition of aggressive intervention in the tech industry.

Many of the clauses in the draft regulation are principles that AI critics are advocating for in the West: data used to train generative AI models shouldn’t infringe on intellectual property or privacy; algorithms shouldn’t discriminate against users on the basis of race, ethnicity, age, gender, and other attributes; AI companies should be transparent about how they obtained training data and how they hired humans to label the data.

At the same time, there are rules that other countries would likely balk at. The government is asking that people who use these generative AI tools register with their real identity—just as on any social platform in China. The content that AI software generates should also “reflect the core values of socialism.” 

Neither of these requirements is surprising. The Chinese government has regulated tech companies with a strong hand in recent years, punishing platforms for lax moderation and incorporating new products into the established censorship regime. 

The document makes that regulatory tradition easy to see: it frequently references other rules China has passed on personal data, algorithms, deepfakes, cybersecurity, and so on. In some ways, it feels as if these discrete documents are slowly forming a web of rules that helps the government tackle new challenges of the tech era.

The fact that the Chinese government can react so quickly to a new tech phenomenon is a double-edged sword. The strength of this approach, which looks at every new tech trend separately, "is its precision, creating specific remedies for specific problems," wrote Matt Sheehan, a fellow at the Carnegie Endowment for International Peace. "The weakness is its piecemeal nature, with regulators forced to draw up new regulations for new applications or problems." If the government is busy playing whack-a-mole with new rules, it could miss the opportunity to think strategically about a long-term vision for AI. We can contrast this approach with that of the EU, which has been working on a "hugely ambitious" AI Act for years, as my colleague Melissa recently explained. (A recent revision of the AI Act draft included regulations on generative AI.)


