The Risks of AI Content Personalisation

Content personalisation is a strategy that uses data to tailor and direct content to specific audiences or individuals. It can range from using broad demographic data, such as age range and nationality, to specific personal information like religion, sexuality, or political beliefs.

Personalisation can produce exceptional results in marketing and user-retention contexts, which has made it an attractive proposition for businesses. AI has allowed the process to be automated, and as a result, many platforms employing AI-driven content personalisation are incentivised to collect as much personal and behavioural data on their users as possible in order to maximise the effectiveness of their personalisation algorithms. Collected data is typically stored in a user profile and used to inform future automated personalisation, e.g. via recommender systems that surface specific content to users or deprioritise content so that it is never shown.

These systems are usually applied with the goal of maximising user retention and engagement, or driving sales. However, they have also been used for political messaging, manipulation, and ‘nudging’ users towards certain beliefs, and they have incidentally produced filter bubbles and echo chambers in many online spaces (social media in particular), wherein personalised content systems simply prioritise information that aligns with a user’s existing beliefs and filter out conflicting information. Given the popularity and ubiquity of AI-driven social media worldwide, and the associated decline of traditional news media, these systems have had a noted radicalising effect on users.

The increasing digitisation of society, and the increasingly technological inclinations of each generation, have given corporations, providers, controllers, deployers, and holders of techno-digital infrastructure enormous epistemic control with little oversight. This presents a serious concern for freedom of thought, freedom of opinion, and individual autonomy in general.
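To make the feedback loop concrete, here is a minimal sketch of the kind of profile-based recommender described above. It is illustrative only: the `UserProfile` class, the topic labels, and the update rule are assumptions for the sake of the example, not any real platform’s implementation, but they show how reinforcing engagement signals naturally deprioritises conflicting content and narrows the feed.

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    # Hypothetical per-user interest weights, learned from behavioural
    # signals (clicks, watch time, likes). Keys are topic labels.
    interests: dict[str, float] = field(default_factory=dict)

    def update(self, topic: str, engagement: float, rate: float = 0.1) -> None:
        # Reinforce topics the user engages with: the feedback loop
        # that, left unchecked, produces a filter bubble.
        current = self.interests.get(topic, 0.0)
        self.interests[topic] = current + rate * (engagement - current)

def rank_feed(profile: UserProfile, candidates: list[dict]) -> list[dict]:
    # Score each candidate purely by alignment with the stored profile;
    # misaligned items sink and are effectively deprioritised.
    def score(item: dict) -> float:
        return profile.interests.get(item["topic"], 0.0)
    return sorted(candidates, key=score, reverse=True)

profile = UserProfile()
profile.update("politics_a", engagement=1.0)
feed = rank_feed(profile, [
    {"id": 1, "topic": "politics_a"},
    {"id": 2, "topic": "politics_b"},  # conflicting viewpoint, ranked last
])
print([item["id"] for item in feed])   # -> [1, 2]
```

Each call to `update` shifts the profile further towards what the user already consumes, so every subsequent ranking is more homogeneous; nothing in the loop ever rewards showing the user something that challenges their views.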

In legislating the matter, one of the EU’s major goals has been mandating transparency. The GDPR’s ‘right to explanation’ suite of articles is one approach, since much of the logic and decision-making behind automated content personalisation is hidden from users themselves. Letting users see why they are being shown a particular piece of content may at least enable them to take it with a grain of salt, but transparency alone may not be sufficient if users lack the ability to actually change the parameters that determine what they see. The Digital Services Act’s provisions on recommender systems also push for clear explanations and give users some control over their own experience, though that control may be limited, and it remains to be seen whether it meaningfully allows users to shape their experiences. The DSA does, however, acknowledge the fundamental risk posed by the largest platforms (VLOPs, Very Large Online Platforms) and the need for systemic oversight there, and the AI Act continues that trend by designating certain AI systems as ‘high risk’ or posing ‘systemic risk’, which has the potential to significantly improve transparency and accountability in AI-driven content personalisation. Its effectiveness will, however, depend on robust enforcement and the development of clear standards.
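A hedged sketch of what these obligations might look like at the code level follows. The function names and data shapes are invented for illustration, but the two behaviours correspond broadly to the DSA’s recommender-transparency rules (Art. 27) and the non-profiling feed option it requires of VLOPs (Art. 38).

```python
from dataclasses import dataclass

@dataclass
class RankedItem:
    item_id: int
    topic: str
    score: float
    # A human-readable account of the main signals behind the score,
    # in the spirit of GDPR explanation duties and DSA Art. 27.
    explanation: str

def rank_with_explanations(
    interests: dict[str, float],
    candidates: list[dict],
    use_profiling: bool = True,
) -> list[RankedItem]:
    """Rank candidates; use_profiling=False mimics the non-profiling
    feed option the DSA requires of VLOPs (Art. 38)."""
    ranked = []
    for item in candidates:
        if use_profiling:
            score = interests.get(item["topic"], 0.0)
            why = (f"Shown because your profile weight for "
                   f"'{item['topic']}' is {score:.2f}.")
        else:
            score = float(item.get("recency", 0))  # e.g. a chronological feed
            why = "Shown in chronological order; no profiling applied."
        ranked.append(RankedItem(item["id"], item["topic"], score, why))
    return sorted(ranked, key=lambda r: r.score, reverse=True)
```

The point of the sketch is that an explanation is cheap to attach at ranking time; whether the explanation is specific enough to be useful, and whether the non-profiling option is prominent enough to be used, is exactly what enforcement will have to test.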

The throughline of criticism is that, despite all three pieces of legislation imposing transparency requirements, they fail to directly address the root cause of the threat to individual autonomy: the lack of meaningful user control over the design and operation of these systems. While transparency is a necessary first step, it is insufficient; knowing why something is shown does not necessarily give you the power to change it. The core issue is the power imbalance: platforms design these systems, and users are largely subject to them. The missing piece of the puzzle, therefore, is a set of mechanisms that empower users to actively shape their online experiences and reclaim some of that lost autonomy.
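What such a mechanism could look like is worth sketching, if only to show it need not be complicated. The snippet below is hypothetical (the `diversity` slider and `muted_topics` set are invented parameters, drawn from neither regulation nor any platform), but it illustrates one reading of ‘meaningful control’: the user, not the platform, decides how strongly the profile drives the feed.

```python
import random

def user_controlled_feed(
    interests: dict[str, float],
    candidates: list[dict],
    diversity: float = 0.0,                      # user-set: 0.0 = pure
                                                 # personalisation, 1.0 =
                                                 # ignore the profile entirely
    muted_topics: frozenset[str] = frozenset(),  # user-curated exclusions
) -> list[dict]:
    # Blend the profile score with a uniform random score, weighted by the
    # user's own diversity setting, and drop anything the user has muted.
    def score(item: dict) -> float:
        personal = interests.get(item["topic"], 0.0)
        return (1 - diversity) * personal + diversity * random.random()
    pool = [c for c in candidates if c["topic"] not in muted_topics]
    return sorted(pool, key=score, reverse=True)
```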
