AI developers are no longer required to share safety test results, even when their systems may pose risks to US national security or public health. The rationale is that removing this requirement will allow AI developers to innovate more freely.

What do people think about this?