- cross-posted to:
- technology@lemmy.ml
cross-posted from: https://lemmy.world/post/25011462
SECTION 1. SHORT TITLE
This Act may be cited as the ‘‘Decoupling America’s Artificial Intelligence Capabilities from China Act of 2025’’.
SEC. 3. PROHIBITIONS ON IMPORT AND EXPORT OF ARTIFICIAL INTELLIGENCE OR GENERATIVE ARTIFICIAL INTELLIGENCE TECHNOLOGY OR INTELLECTUAL PROPERTY
(a) PROHIBITION ON IMPORTATION.—On and after the date that is 180 days after the date of the enactment of this Act, the importation into the United States of artificial intelligence or generative artificial intelligence technology or intellectual property developed or produced in the People’s Republic of China is prohibited.
Currently, China has the best open source models in text, video and music generation.
While unfettered access is bad in general, DeepSeek takes it a step further: a Mixture of Experts approach that reduces computational load is great when you know exactly what “Experts” it’s using, but not so great when there is no way to check whether some of those “Experts” might be focused on extracting intelligence under specific circumstances. See the sketch below for what the routing actually looks like.
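For anyone unfamiliar, MoE routing is simple to describe: a learned router sends each token to a small subset of “expert” sub-networks. Here’s a minimal sketch in Python/NumPy; the expert count, `top_k`, and dimensions are invented for illustration, not DeepSeek’s actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

num_experts = 8   # hypothetical number of experts
top_k = 2         # experts activated per token
d_model = 16      # hypothetical hidden size

# A learned router projects each token's hidden state to one score per expert.
router_weights = rng.normal(size=(d_model, num_experts))

def route(token_hidden):
    """Return the indices and softmax weights of the top_k experts for one token."""
    logits = token_hidden @ router_weights
    top = np.argsort(logits)[-top_k:]             # indices of the k highest-scoring experts
    weights = np.exp(logits[top] - logits[top].max())
    return top, weights / weights.sum()

token = rng.normal(size=d_model)
experts, weights = route(token)
# With open weights you can log *which* experts fire on which inputs --
# but not *why* those experts were trained to respond to those inputs.
print(f"token routed to experts {experts} with weights {np.round(weights, 3)}")
```

The point being: even with full weight access you can observe the routing, but the experts themselves are opaque learned functions, so “knowing which experts fire” doesn’t tell you what they were trained to do.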
I agree that you can’t know if the AI has been deliberately trained to act nefariously given the right circumstances. But I maintain that it’s (currently) impossible to know if any AI has been inadvertently trained to do the same, so the security implications are no different. If you’ve given an AI the ability to exfiltrate data without any oversight, you’ve already messed up, no matter whether you’re using a single AI you trained yourself, a black box full of experts, or DeepSeek directly.
But all this is about whether merely sharing weights is “open source”, and you’ve convinced me that it’s not. There needs to be a classification similar to “source available”; this would be something like “weights available”.