"Rightward Shift Just by Watching 'Recommendations' for a Few Weeks? The 'Irreversible' Impact of X Algorithm"

"Rightward Shift Just by Watching 'Recommendations' for a Few Weeks? The 'Irreversible' Impact of X Algorithm"

"Recommendations": Do They "Move" Opinions?

A social media timeline is not just a list of posts. Many people intuitively understand that which posts catch our eye and which get buried can influence our interests, emotions, and even our political views. Yet surprisingly few studies have experimentally tested how much the algorithms of large-scale platforms can actually change political attitudes.


The study now drawing attention ran a real-world randomized experiment on X's feed settings, comparing the "For You" (recommendation) feed with the "Following" (chronological) feed, and showed how that difference can affect political priorities and views on international issues. A key point is that the influence may persist even after switching back, rather than lasting only while users are exposed.


What the Study Did: A 7-Week Experiment with About 5,000 Participants

The research team randomly assigned approximately 5,000 active users in the United States to one of two feed conditions.

  • Recommendation Feed (For You): Arranged in order of likely engagement, including posts from accounts the user does not follow

  • Chronological Feed: Displays posts from followed accounts in chronological order


The experiment period was 7 weeks in 2023. During this time, the participants' political attitudes (what they consider important issues, evaluations of specific political events, views on international conflicts, etc.) and platform behavior (changes in who they follow, etc.) were tracked.


Results: Recommendation Feed Strengthened "Right-Leaning Priorities"

The main result was that participants who were switched from the chronological feed to the recommendation feed tended to give higher priority to issues often emphasized by the Republican Party (crime, inflation, immigration, and so on). Their evaluations of specific political events (such as investigations surrounding U.S. politics) also shifted in a direction more aligned with conservative and Republican views.


On international issues as well, the study reported a shift concerning the Russia-Ukraine war: favorability toward Ukraine weakened, and an index of relatively pro-Russia views rose.


The important point is that the findings cannot be dismissed as "those users were already like that": because participants were randomly assigned, the study shows that changing the feed structure alone can shift average attitudes.


Why It Happened: Increase in Right-Leaning Posts and Decrease in News

The research team not only measured changes in attitudes but also compared the content flowing through the two feeds. Relative to the chronological feed, the recommendation feed showed the following tendencies:

  • the proportion of right-leaning content was higher

  • posts from traditional news organizations appeared relatively less often

  • posts from political activists and accounts with strong opinions were elevated


In other words, it is not just that right-leaning posts get more exposure. A structure in which the "common factual foundation" traditionally provided by news media is diluted, while activist posts move to the forefront, may itself be the soil in which opinions change.


The Scariest Point: "Turning It Off" Doesn't Necessarily Mean Reverting

This is why the study attracted particular attention. The recommendation feed not only changes what users are exposed to each day; it also alters their following behavior, and that change tends to persist.


While on the recommendation feed, participants were more likely to be nudged into following additional right-leaning accounts. Once those follow relationships change, switching back to the chronological feed does not undo them: the set of followed accounts remains altered, so the world (the information environment) the user sees does not fully "revert."


The study highlighted the perspective that the impact of algorithms goes beyond just "reordering at that moment" and extends to reshaping the user's information environment.


Reactions on Social Media: A Mix of Welcome, Caution, and Opposition


There are four main currents in the reactions on social media regarding this study.


1) Researchers and Science Communicators: The Value of Demonstrating the "Obvious" Experimentally
On platforms like Bluesky, many highlight the significance of confirming through a large-scale field experiment what had only been understood intuitively, and of countering claims that attitudes don't change. The dominant tone is less surprise than appreciation that there is now more evidence usable in policy discussions.


2) Media and Commentators: Raising Issues with the Structure Where "News Declines"
Many reactions focus on the finding that the recommendation feed relatively buries news organizations and elevates activist posts. Beyond political bias, the concern is that the quality and verifiability of information decline, distorting how attitudes are formed.


3) General Users: "It Feels That Way" and "Aren't Other Social Media the Same?"
On Reddit, reactions drawing on personal experience appear, such as "not surprised" and "Facebook seems similar," while others caution that "many people don't use it for political purposes" and warn against extrapolating to users who rarely see political posts. In other words, alongside agreement with the findings, users are also discussing how far the study reflects their own patterns of use.


4) Platform Defense and Opposition: Counterarguments Regarding the Interpretation of "Rightward Shift"
Counterarguments include "becoming more conservative isn't inherently bad," "the study is set in U.S. politics and doesn't directly apply to other countries," and "chronological feeds are biased too." In particular, arguments linking recommendation feeds to "censorship" mix with warnings against tighter regulation, showing how even the interpretation of the study becomes a political issue.


In the Japanese-speaking sphere, articles introducing the overseas coverage are prompting comments such as "recommendations do sharpen arguments, as many felt" and "reverting to chronological is no panacea." In Japan, discussions tend to go beyond the left-right axis to take in international issues, conspiracy theories, and online pile-ons, with the focus often on **"who benefits from the design."**


What Can Be Said from Here: "Design and Transparency" Over Individual Settings

The question this study raises goes beyond how individuals tweak their settings. Of course, self-defense is possible: choosing the chronological feed, reviewing who you follow, consciously adjusting your exposure to political topics. But the heart of the matter lies beyond what individual effort can offset: how platform design shapes society's information environment.


In particular, as long as the following remain unclear, we will be left reacting only to the outcomes (division, radicalization, bias):

  • what the algorithm treats as a "good reaction" and learns from

  • which categories of posts are elevated or suppressed

  • how strongly follow behavior is being steered


Researchers and some commentators argue that algorithmic transparency and accountability should be demanded institutionally. If these systems influence society's decision-making as much as public infrastructure does, they cannot simply be left as a **"black box."**


Caution: Not a Universal Conclusion

That said, a few reservations apply when reading the study. The participants were active users in the U.S., so the results may not carry over directly to countries with different political cultures and issue structures. The reported effects are also averages; the impact may vary with user attributes and how people use the platform.


Even so, demonstrating on a real, large-scale platform that feed differences move attitudes and behaviors, and that the effects can persist after switching back, is a substantial contribution. From here, the discussion should move on to which kinds of recommendation affect which people, under what conditions.



Source