Beyond Screen Time: How Algorithms Shape Teens’ Exposure to Violence
Picture this: a 13-year-old comes home from school, pulls out their phone, and scrolls through short videos. It starts innocently — dance clips, sports highlights, comedy skits. But within minutes, their feed fills with footage of graphic fights, weapons, threats and even murders. They never searched for it, yet the platform’s invisible systems decided it was what they should see.
This is not a rare story. For today’s adolescents, social media is the backdrop of daily life: how they connect, express themselves, and explore their developing identity. But emerging research reveals a hidden danger. It isn’t only the violent content itself that causes harm — it’s the algorithms that amplify it, making disturbing material difficult to avoid.
As a child psychologist, I see this issue as both a mental health concern and a community responsibility. If parents and educators aren't there to protect our youth, who is? The hidden influence of algorithms is both toxic and ubiquitous: every scroll, swipe and click online is guided by mathematical systems designed to predict what will hold our attention. These digital curators don't necessarily show what's healthiest or most uplifting; their goal is engagement.
And violent videos meet that goal all too well. They are dramatic, emotionally charged and nearly impossible to ignore. This became painfully clear to me after the recent assassination of political activist Charlie Kirk and the murder of Ukrainian refugee Iryna Zarutska. Several of my teen clients handed me their phones, saying things like, "Hey Dr. O, check this out," or "Did you see this?" They showed me graphic videos of the killings in real time. Not one expressed emotional anguish over what they had seen; liberal or conservative, they recounted the images with almost casual detachment. I don't believe that's because they didn't care, but because repeated exposure to violent imagery has dulled their natural response.
That desensitization is exactly what concerns me. Each click on a violent video unleashes a cascade of similar recommendations, and for young people still developing emotional regulation, the effect can be overwhelming. Recent audits confirm that even when children do not actively search for violent material, algorithms often push graphic or otherwise harmful content into their feeds.
A large survey of 10,000 adolescents in England and Wales found that 70 percent had encountered real-world violent content online in the past year, yet only 6 percent reported actively searching for it. Far more said the platforms themselves had recommended it. Other studies confirm the heightened vulnerability of teenagers: during adolescence, the brain’s emotional centers develop faster than impulse control or critical reasoning. That imbalance leaves them highly sensitive to shocking media but less equipped to analyze or resist it.
The consequences are not abstract. Teens exposed to weapon-related videos report feeling less safe in their neighborhoods, with many reducing time outdoors. Research also links repeated exposure to higher rates of anxiety, depression and disrupted sleep. Over time, adolescents can even become desensitized, normalizing aggression or adopting it themselves.
Several factors make the harm worse:
- Repetition, which can normalize violence
- Unexpected exposure, which blindsides teens who never sought it
- Sleep disruption, with frightening imagery lingering into the night
- Distorted perceptions, leading young people to believe the world is more dangerous than it is
What can parents do? Shielding teens entirely may feel impossible, but small, consistent efforts help.
- Curate and supervise content. Filters help, but co-viewing and discussions help more. Ask questions like: “How did that make you feel?” or “Why do you think this showed up?”
- Teach media literacy. Explain how algorithms work—that platforms prioritize emotional content, not necessarily healthy content. Encourage teens to ask: “What does this video want from me?”
- Set content boundaries, not just time limits. Rules about violent material matter just as much as hours spent online.
And perhaps most importantly, remember that modeling is powerful. Teens notice when parents limit their own exposure to violent or sensationalized content.
Still, families cannot face this problem alone. The widespread risks algorithms pose to adolescents demand broader accountability. Platforms and policymakers must establish guardrails: stronger safety features, transparent recommendation systems, and meaningful regulation. Former U.S. Surgeon General Dr. Vivek Murthy has already emphasized the harms of engagement-driven design, urging a rethinking of features that undermine youth wellbeing. It is time for tech leaders and lawmakers to act with urgency rather than wait to react.
This responsibility extends beyond social media feeds. Artificial intelligence, too, has introduced unanticipated dangers. Earlier this year, 16-year-old Adam Raine tragically ended his life after turning to a chatbot for guidance on suicide. According to his family's lawsuit, the AI not only failed to redirect him to safety but actively supported his planning, even helping to draft a note. His story underscores the need for stronger oversight across digital platforms of every kind.
For years, the conversation around youth and technology has centered on how much time teens spend online. But the greater danger lies in what they are exposed to — and the forces deciding that for them. If algorithms continue to prioritize clicks over care, our children will pay the price in their mental health and sense of safety. Protecting them requires a shift in priorities: parents guiding with awareness, policymakers demanding transparency, and platforms taking responsibility for what they amplify. Only then can we create a digital environment that supports — not sabotages — the healthy development of our next generation.
Michael Oberschneider, Psy.D., "Dr. Mike," is a clinical psychologist in private practice.
He can be reached at 703-723-2999; his office is located at 44095 Pipeline Plaza, Suite 240, Ashburn.