Top officials in dozens of states have seen how generative AI chatbots and characters, if handled badly, can be harmful to children. And they have a stern warning for the industry: "If you knowingly harm kids, you will answer for it."
That message came through clearly in a letter sent this week by 44 state attorneys general to 13 AI companies. The AGs said they were writing to tell executives that they would "use every facet of our authority to protect children from exploitation by predatory artificial intelligence."
Concern over AI's influence on children has been building for some time, but it was amplified in the past week. The AGs specifically cited a recent report from Reuters showing that Meta's guidelines allowed its AI to engage children in conversations that were "romantic or sensual." The company told Reuters the cited examples were "erroneous and inconsistent" with company policies that prohibit content sexualizing children.
Meta did not immediately respond to a request for comment.
The AGs said their concerns were not limited to Meta. "In the short history of chatbot parasocial relationships, we have repeatedly seen companies display an inability or an apathy toward basic obligations to protect children," they wrote.
The risks of parasocial relationships and treacherous interactions with AI chatbots are increasingly clear. In June, the American Psychological Association issued a warning about teenagers' and young adults' use of AI, saying parents should help their children learn to use the tools wisely. The rapid expansion of AI chatbots used as "therapists" has increased the likelihood that people receive harmful advice in interactions where they are especially vulnerable. A study published this week found that large language models are inconsistent in how they respond to questions about suicide.
At the same time, there are few firm rules around what AI developers can and cannot do and how these tools are allowed to work. A move to stop states from enforcing their own AI laws and rules failed in Congress earlier this year, and there is still no federal framework for how AI can be deployed safely. Lawmakers and advocates, like the AGs in this week's letter, have said they want to avoid a free-for-all like the one that surrounded social media, but clear rules have yet to be written. President Trump's AI Action Plan, published in July, concentrated on reducing regulations for AI companies, not introducing new ones.
The state AGs said they would take matters into their own hands.
"You will be held accountable for your decisions," they wrote. "Social media platforms caused significant harm to children, in part, because government watchdogs did not do their job fast enough. The potential harms of AI dwarf those of social media. We wish you all success in the race for AI."
If you feel like you or someone you know is in immediate danger, call 911 (or your local emergency line) or go to an emergency room to get immediate help. Explain that it is a psychiatric emergency and ask for someone who is trained for these kinds of situations. If you're struggling with negative thoughts or suicidal feelings, resources are available to help. In the US, call the National Suicide Prevention Lifeline at 988.