Artificial intelligence applied by very large online platforms and search engines to mitigate systemic online risks

Authors

DOI:

https://doi.org/10.20318/cdt.2025.9331

Keywords:

systemic risks, Artificial Intelligence, algorithm, online search engines, online platforms

Abstract

In recent years, the use of artificial intelligence for content moderation has become a prevalent practice among major online platforms and search engines. These sophisticated algorithmic systems perform vital functions, namely the rapid detection, restriction and removal of content deemed illegal (or otherwise incompatible with a service's terms and conditions) on the web. This increasing reliance on artificial intelligence responds to the exponential growth in the volume of data and the accelerated pace at which content is created and disseminated on the internet. However, its implementation requires caution to avoid misleading results and the consequent negative impact on the constitutionally protected rights and freedoms of digital users. This challenge may be further exacerbated by the obligation of these technological oligarchs to direct their efforts at mitigating “incorrect or misleading content, including misinformation” owing to the systemic risk associated with their services.

Downloads

Download data is not yet available.
Published

2025-03-19

Issue

Section

Estudios

How to Cite

Artificial intelligence applied by very large online platforms and search engines to mitigate systemic online risks. (2025). CUADERNOS DE DERECHO TRANSNACIONAL, 17(1), 308-328. https://doi.org/10.20318/cdt.2025.9331