Algorithmic Feed Design Can Reduce Online Toxicity, Stanford Study Finds
A new Stanford study suggests that algorithmic feed design can meaningfully cool online conflict. The research shows that political hostility is not inevitable; platform design choices shape much of what people feel while scrolling.
The team developed a simple browser tool that quietly downranks hostile or antidemocratic posts on X, without deleting any posts or requiring cooperation from the platform. As a result, users keep full access to political speech while avoiding the sharpest spikes in hostility.
How the tool shifts online behavior
Researchers from Stanford, Northeastern, and the University of Washington built the tool with a clear goal: reduce conflict without silencing anyone. To that end, they used a large language model to scan posts in real time, flagging content that showed extreme hostility, attacks on democratic norms, or calls for violence. A flagged post was not removed; it simply moved lower in the feed.
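The core idea, downranking rather than deleting, can be sketched in a few lines. This is a hypothetical illustration, not the study's released code: the `flagged` field stands in for the verdict of the team's LLM classifier, and the reranking rule simply moves flagged posts below unflagged ones while preserving each group's original order.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    flagged: bool  # stand-in for an LLM hostility/antidemocratic classifier's verdict

def rerank(feed):
    """Move flagged posts below unflagged ones; nothing is deleted.

    Each group keeps its original relative order, so the feed is only
    demoted, never censored.
    """
    return [p for p in feed if not p.flagged] + [p for p in feed if p.flagged]

feed = [
    Post("calm policy analysis", flagged=False),
    Post("hostile rant", flagged=True),
    Post("local election news", flagged=False),
]
print([p.text for p in rerank(feed)])
# → ['calm policy analysis', 'local election news', 'hostile rant']
```

Because every post remains in the feed, users can still scroll down to see demoted content; the intervention only changes what surfaces first.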
This small change produced noticeable effects. During a 10-day test with 1,200 participants, users who saw fewer hostile posts felt warmer toward the opposing political party; the shift matched the kind of change usually seen nationwide only over several years. These users also reported less anger and sadness, which hints at real emotional benefits.
A shift toward user control
The study also highlights growing interest in giving people more control over ranking systems. For example, the research team released its code so that others can build tools that reduce harassment or help manage anxiety, a transparency that supports a healthier digital environment.
The findings offer hope. They show that online division does not have to deepen by default. With thoughtful design and user guided tools, social feeds can encourage healthier engagement instead of fueling conflict.