How Personalized Algorithms Are Controlling Your World
What you see (and don’t see) every day shapes your interests. Here’s how algorithmic personalization decides for you.
Let’s talk about algorithmic personalization. Does anyone remember when their Facebook feed was in chronological order? I distinctly recall the day Facebook implemented a new algorithm, throwing my feed into disarray. Since then, we’ve grown accustomed to platforms prioritizing popularity, personalization and engagement.
In a world of unlimited choices, personalized algorithms have shifted us from choice overload to effectively having no choice at all. While these algorithms are meant to streamline the user experience, keep users engaged and optimize marketing strategies, where might they fall short?
Starting positively, AI-powered algorithms have revolutionized marketing, making it far more efficient. We've moved from clumsy, manual analysis that broadly categorized consumers to machine learning processes that identify specific behaviors, enabling proactive rather than reactive strategies. This shift could render generic campaigns obsolete as consumers demand unique, tailored experiences. More data allows for more accurate targeting, enhancing customer experience, return on investment (ROI) and scalability, while enabling real-time business decisions.
However, these benefits come with costs. Data privacy concerns and the potential over-reliance on AI, resulting in the loss of a “human touch” in advertising, are real challenges that businesses are facing as they integrate AI into their workflow. Calls for transparency and a hybrid (human-machine) model are potential solutions to mitigate these effects.
In news content, algorithms also play a significant role. They sift through vast amounts of data to identify user preferences and tailor content accordingly, a practice common not just in news feeds, but also on social media platforms. On the positive side, personalized algorithms optimize for trending material and save users from manually sorting through content. In these same contexts, AI assists developers and content creators by curating headlines with search engine optimization (SEO), creating illustrations, editing text, analyzing user habits, generating interactive features, and automating tasks.
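To make this concrete, here is a toy sketch of how an engagement-driven feed might rank items. This is a hypothetical illustration, not any platform's actual algorithm: it simply blends how well an item's topic matches a user's past clicks with the item's overall popularity.

```python
# Toy sketch of engagement-driven feed ranking (hypothetical, not any
# platform's real system). Each item is scored by mixing personal
# relevance (the user's click history by topic) with raw popularity.

def rank_feed(items, user_topic_clicks, popularity_weight=0.5):
    """Return items sorted by a blend of personal relevance and popularity."""
    total_clicks = sum(user_topic_clicks.values()) or 1
    max_engagements = max(item["engagements"] for item in items)

    def score(item):
        # Personal relevance: share of the user's clicks on this topic.
        relevance = user_topic_clicks.get(item["topic"], 0) / total_clicks
        # Popularity: engagement normalized against the most-engaged item.
        popularity = item["engagements"] / max_engagements
        return (1 - popularity_weight) * relevance + popularity_weight * popularity

    return sorted(items, key=score, reverse=True)

items = [
    {"id": "a", "topic": "sports",  "engagements": 900},
    {"id": "b", "topic": "science", "engagements": 300},
    {"id": "c", "topic": "science", "engagements": 100},
]
user = {"science": 8, "sports": 2}  # this user mostly clicks science

ranked = rank_feed(items, user)
# With equal weighting, the highly popular sports item still outranks the
# science items the user actually prefers — a miniature version of how
# optimizing for engagement can override personal interest.
```

Note how the `popularity_weight` knob decides whose interests win: crank it up and the feed chases clicks; turn it down and it mirrors the user back to themselves.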
Of course, every rose has its thorn, and these algorithms can inadvertently spread misinformation by prioritizing clicks and views. They can also create dangerous echo chambers and filter bubbles, preventing users from encountering content that contradicts their beliefs. On social and news sites alike, transparency in how these algorithms function, along with attention to the ethical considerations of privacy and bias, is crucial.
So, this algorithmic personalization stuff all seems to be fine and dandy, sans a few kinks that still need to be worked out. But is this the view shared by those profiting from the algorithms, or those whose consumption habits are being controlled?
A study by Anastasia Kozyreva et al. surveyed public perceptions of algorithmic personalization and data collection in Germany, Great Britain, and the United States. The results showed general objection to the collection and use of sensitive personal data, especially for political advertising. Ironically, there was an acceptability gap: people accepted personalized services more than the data collection needed to power these services. Moreover, in Germany and Great Britain, people objected to the personalization of news sources, while in the United States, this was not the case. This study highlights that attitudes toward personalization are complex, especially in the realm of political advertising.
So, where does the push for personalization come from? While I enjoy a curated social or news feed, I'm aware of the potential harmful effects on both individual and societal levels. Companies approach personalization through optimization, control and self-interest, applying economic theories to humans. Travis Greene discusses this perspective in a piece on Medium, challenging the notion that personalization promotes freedom because users voluntarily interact with platforms, leaving behind valuable data. Greene argues that economic motivations can lead companies to obscure the negative social and psychological effects of personalization, reflecting the conflict between humanistic and economic visions.
Clearly, algorithmic personalization still has room for improvement. Issues like intrusive ads and friends' content getting lost in the algorithm persist. According to a piece by Jarno M. Koponen on TechCrunch, these issues arise from gaps in data, computing power, user interests, available actions and content. Consider the following examples.
Data gap: Algorithms only have the data users provide through their interactions.
Computing gap: Algorithms are constrained by limited computing power and machine learning capabilities.
Interest gap: Users often have conflicting interests with third parties or advertisers.
Action gap: Platforms offer limited actions, which may not cover all user desires.
Content gap: Not every platform has content that suits every user’s preferences.
So what are the solutions to these issues, both technical and ethical? Are there any?
On the technical front, solutions include human-centered algorithmic personalization, updating user interfaces for greater transparency (why am I being shown what I'm seeing?) and offering more customizable reactions. Personalized algorithms should also balance relevant and novel content, both new and old, improving as they come to understand users better.
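One generic way to strike that balance between relevant and novel content is epsilon-greedy exploration, a standard technique from recommender systems (sketched here as an illustration, not any platform's documented approach): most of the time serve what the user is known to like, but occasionally surface something outside their usual interests.

```python
import random

# Generic epsilon-greedy sketch for mixing familiar and novel content
# (an illustrative technique, not any specific platform's approach).

def pick_item(familiar, novel, epsilon=0.2, rng=random):
    """With probability epsilon show a novel item, otherwise a familiar one."""
    if novel and rng.random() < epsilon:
        return rng.choice(novel)   # exploration: poke a hole in the filter bubble
    return rng.choice(familiar)    # exploitation: serve known interests

familiar = ["sports highlight", "sports analysis"]
novel = ["science explainer", "local news"]

# Simulate a 50-item feed: roughly 80% familiar, 20% novel.
feed = [pick_item(familiar, novel, epsilon=0.2) for _ in range(50)]
```

The `epsilon` parameter is exactly the kind of dial that could be exposed to users for greater control: set it to zero and you get a pure filter bubble; raise it and the feed deliberately widens your lens.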
As algorithmic personalization has become ubiquitous and inescapable, it is critical to recognize that while it can enhance user experiences, it can also create echo chambers and filter bubbles. Personalization extends beyond news and social media feeds to music and television streaming platforms as well. An article by James Skyes on Humans of Globe reminds us that AI personalization actively shapes our worldviews: a paradox in which personalization is based on our interests, yet then shapes our future interests. These narrow lenses for viewing the world may not actually be beneficial in the long run! As technology continues to develop at an exponential pace, we must advocate for transparency and greater user control over online experiences, lest monetization incentives drive companies to maximize user engagement at all costs.
Best,
Nina for the Don’t Count Us Out Yet Team