Daniel González Herrera
@danielglez.bsky.social
Associate Professor, Int & EU law. University of Salamanca
We must do everything possible to ensure that this debate is not guided by our baser passions, and to correctly identify those who try to play on our emotions for political gain. And remember that, on the vast majority of issues, there is still more that unites us than separates us.
March 19, 2025 at 7:54 AM
For democracy to survive, it is necessary to redefine the meaning of public debate: we must once again be capable of dialogue with those who think differently from us, and make a conscious effort to empathise with them in order to understand their points of view.
March 19, 2025 at 7:54 AM
‘Maud knew the proprietor of the [Daily Mail]. Like all great press men, he really believed the drivel he published. His talent was to express his readers’ most stupid and ignorant prejudices as if they made sense, so that the shameful seemed respectable. That was why they bought the paper.’
March 19, 2025 at 7:54 AM
There is a passage in the book ‘Fall of Giants’ in which Ken Follett, with his usual narrative skill, gives voice to this concern:
March 19, 2025 at 7:54 AM
However, for a large part of the population the press is no longer a reliable source for developing critical knowledge. Much worse: populist movements and leaders have created alt-media that do not feel bound by the ethical standards of journalism and are therefore particularly dangerous.
March 19, 2025 at 7:54 AM
In earlier times, this role was assumed by ‘opinion leaders’ who found an echo in the free press. That mechanism was particularly apt because the serious press, although imperfect, usually contains self-correcting mechanisms.
March 19, 2025 at 7:54 AM
This is a particularly difficult problem to tackle, given that in today’s large democracies personal interactions are, by necessity, too limited to produce the democratic dialogue and debate essential for the system to survive.
March 19, 2025 at 7:54 AM
In addition, there are issues that have to do with how humans react when confronted with information that challenges our preconceived views. The reaction is usually defensive; it tends to be tempered in offline interactions but to become more radical in online ones.
March 19, 2025 at 7:54 AM
This idea, defended by Rousseau in ‘Emile’, can no longer be maintained in a world in which immediate access to a large part of human knowledge via the internet has not only failed to produce a more tolerant society with a greater critical spirit, but has, on occasion, led to the exact opposite result.
March 19, 2025 at 7:54 AM
On the other hand, however, the idea that more education or more information is a panacea against bias or discriminatory attitudes was abandoned many years ago as naive.
March 19, 2025 at 7:54 AM
It is possible that an active effort in digital media literacy, aimed at recognising, questioning and confronting these less visible forms of censorship, is an appropriate channel for developing critical attitudes towards the information users receive from artificial intelligence models.
March 19, 2025 at 7:54 AM
In models developed by autocratic systems, censorship is more evident and expected, which allows users to be aware of the dangers. By contrast, the diffuse and subtle nature of self-censorship in Western models can catch users off guard, hindering their ability to react appropriately to these biases.
March 19, 2025 at 7:54 AM
However, Western-developed LLMs engage in comparable self-censorship, as seen in their inability to criticise Donald Trump or other Western leaders. Although the two forms of censorship are not entirely equivalent in magnitude or nature, the dangers of the latter should not be minimised.
March 19, 2025 at 7:54 AM
Some experts have pointed out the danger of language models such as DeepSeek, which is unable to present in a balanced way information that could be critical of or damaging to the ruling party in China (for example, the Tiananmen events of 1989).
March 19, 2025 at 7:54 AM
The most worrying thing about artificial intelligences, especially LLMs, is their tendency to agree with humans on everything, replicating the worst practices of the echo chambers of social networks and creating information bubbles that feed confirmation biases and limit democratic debate.
March 19, 2025 at 7:54 AM