Paris – The Forum on Information and Democracy and Reporters Without Borders (RSF) have provided evidence-based contributions to the European Commission's ongoing consultation on the draft Guidelines for Providers of Very Large Online Platforms and Very Large Online Search Engines on the Mitigation of Systemic Risks for Electoral Processes under Article 35 of the Digital Services Act. These guidelines will apply to the 19 services designated as very large online platforms (VLOPs) and very large online search engines (VLOSEs) in the EU.
These guidelines are a step in the right direction. They establish very specific measures and best practices to be implemented by VLOPs and VLOSEs to reduce the systemic risks of their services to the integrity of electoral processes. The Forum on Information and Democracy notes that they are in line with recommendations put forward by the Forum in its recent policy frameworks.
To strengthen the effectiveness of the guidelines and ensure they incorporate state-of-the-art knowledge, the Forum on Information and Democracy and RSF have advised the EU, inter alia, to:
- Clearly articulate that the purpose of the guidelines is not only to address systemic risks but also to build an information environment that provides access to reliable and diverse information, ensuring a fair and democratic election process.
- Include by reference the tools that already exist to provide users with more contextual information on the content and accounts they engage with, notably C2PA standards to identify authenticity and provenance of information and the Journalism Trust Initiative (JTI) which enables users to assess the trustworthiness of news providers. The JTI is already referenced in the European Media Freedom Act (EMFA) – recital 33.
- Strengthen the third-party scrutiny and research provision by mandating access for researchers to conduct A/B testing and test algorithms in “accountability sandboxes”, so they can independently assess the risk mitigation measures developed by providers of VLOPs and VLOSEs.
- Oblige VLOPs and VLOSEs deploying AI systems to conduct impact assessments, both pre-deployment and on a continuous basis, to check their AI systems for bias – including issues of diversity, representation, inaccuracies and misrepresentation across different languages.
These recommendations build mainly upon the Policy Brief “Protecting Democratic Elections through Safeguarding Information Integrity” and the Policy Framework “AI as a Public Good: Ensuring Democratic Control of AI in the Information Space”, recently published by the Forum on Information and Democracy to safeguard information integrity – and thereby electoral integrity – in the AI and digital age.