27 Aug 2025, 02:08
Regulators Warn Companies About Child Safety in Artificial Intelligence
- Attorneys General from 44 states have warned AI companies about protecting children.
- AI interactions with minors raise significant concerns about potential harm.
- Experts warn that some AI companies may be overvalued, increasing risks for investors.
Attorneys General from 44 U.S. states sent a letter to the leaders of 13 artificial-intelligence companies. They expressed concern about child safety and stated that they are prepared to use the full extent of their authority to protect youth from potential harms caused by AI products.
The letter stresses that companies must not expose children to harm, citing examples of AI interacting with youth in unsafe ways. For instance, it was recently reported that Meta permitted its AI chatbots to engage children in romantic conversations, raising further concern.
Moreover, research shows that AI chatbots can give harmful advice, particularly on topics related to suicide. The Attorneys General called for clear rules governing AI to prevent a repeat of the harmful incidents seen on social media.
Meanwhile, the artificial-intelligence market is seeing a surge in investment. Experts warn, however, that some companies may be overvalued, which increases risks for investors. Research from MIT has found that 95% of business projects using AI yield no significant financial returns.
Tags: USA/Technology/AI