Artificial intelligence applied by very large online platforms and search engines to mitigate systemic online risks
Abstract
In recent years, the use of artificial intelligence for content moderation has become a prevalent practice among major online platforms and search engines. These sophisticated algorithmic systems perform vital functions, namely the rapid detection, restriction and removal of content deemed illegal, or otherwise incompatible with the platforms’ terms and conditions, on the web. This growing reliance on artificial intelligence responds to the exponential growth in the volume of data and the accelerated pace at which content is created and disseminated on the internet. However, its implementation requires caution to avoid erroneous results and the consequent negative impact on the constitutionally protected rights and freedoms of digital users. This challenge may be further exacerbated by the obligation of these technology giants to direct their efforts at mitigating “incorrect or misleading content, including misinformation” owing to the systemic risk associated with their services.
Funding data
Agencia Estatal de Investigación, grant number TED2021-129307A-I00