A new report by the American Psychological Association calls on AI developers to build in features to protect the mental health of teens and young adults.
JUANA SUMMERS, HOST:
A new health advisory calls on developers of artificial intelligence and educators to do more to protect young people from manipulation and exploitation. NPR's Rhitu Chatterjee reports.
RHITU CHATTERJEE, BYLINE: Systems using artificial intelligence are already pervasive in our increasingly digital lives.
MITCH PRINSTEIN: It's the part of your email application that finishes a sentence for you, or spell-checks.
CHATTERJEE: Mitch Prinstein is chief of psychology at the American Psychological Association and one of the authors of the new report.
PRINSTEIN: It's embedded in social media, where it tells you what to watch and what friends to have and what order you should see your friends' posts.
CHATTERJEE: It's not that AI is all bad.
PRINSTEIN: It can really be a great way to help start a project, to brainstorm, to get some feedback.
CHATTERJEE: But teens' and young adults' brains aren't fully developed, he says, making them especially vulnerable to the pitfalls of AI.
PRINSTEIN: We're seeing that kids are getting information from AI that they believe when it isn't true. And they're developing relationships with bots on AI, and that's potentially interfering with their real-life, human relationships in ways that we've got to be careful about.
CHATTERJEE: Prinstein says there are reports of kids being pushed to violence and even suicidal behavior by bots, and AI is putting young people at a greater risk of harassment.
PRINSTEIN: You can use AI to generate text or images in ways that are highly inappropriate for kids. It can be used to promote cyberbullying.
CHATTERJEE: That's why the new advisory from the American Psychological Association recommends that AI tools be designed to be developmentally appropriate for young people.
PRINSTEIN: Have we thought about the ways that kids' brains are developing, or their relationship skills are developing, to keep kids safe, especially if they're getting exposed to really inappropriate material or potentially predators?
CHATTERJEE: For example, building periodic notifications into AI tools that remind young people they're interacting with a bot, or suggestions encouraging them to seek out real human interactions. Prinstein says that educators can also help protect youth from the harms of AI. He says schools are just waking up to the harms of social media on kids' mental health.
PRINSTEIN: And we're a little bit playing catch-up. I think it's really important for us to remember that we have the power to change this now, before AI goes a little bit too far and we find ourselves playing catch-up again.
CHATTERJEE: Rhitu Chatterjee, NPR News.
Copyright © 2025 NPR. All rights reserved. Visit our website terms of use and permissions pages at www.npr.org for further information.
Accuracy and availability of NPR transcripts may vary. Transcript text may be revised to correct errors or match updates to audio. Audio on npr.org may be edited after its original broadcast or publication. The authoritative record of NPR's programming is the audio record.