Risk Mitigation

Risk mitigation refers to the actions and measures that providers of very large online platforms and search engines take to reduce or eliminate identified systemic risks that could harm electoral processes, fundamental rights, or public discourse. Under the Digital Services Act, very large online platforms (VLOPs) and very large online search engines (VLOSEs) must identify potential systemic risks and implement reasonable, proportionate, and effective measures to address them.

Legal Basis

"Providers of very large online platforms and of very large online search engines shall put in place reasonable, proportionate and effective mitigation measures, tailored to the specific systemic risks identified pursuant to Article 34, with particular consideration to the impacts of such measures on fundamental rights."

— Article 35(1), Regulation (EU) 2022/2065 (Digital Services Act)

The Commission may also issue guidelines on specific risk mitigation measures, as outlined in Article 35(3) of the Digital Services Act, particularly for risks affecting electoral processes.

Why It Matters

Risk mitigation is central to protecting democracy and fundamental rights in the digital environment. VLOPs and VLOSEs must assess and address actual or foreseeable negative effects on civic discourse, electoral processes, public health, minors, and other areas of public concern. Without effective mitigation, platforms can amplify illegal content, disinformation, and foreign information manipulation.

The obligation applies specifically to designated platforms and search engines with an average of 45 million or more monthly active recipients in the EU. These providers must tailor their measures to the specific risks their services pose, considering factors such as algorithmic amplification, content moderation capacity, and the design of their recommender systems.

During electoral periods, risk mitigation becomes especially critical. Providers should implement enhanced measures such as dedicated monitoring teams, improved fact-checking partnerships, transparent political advertising policies, and crisis response mechanisms to protect the integrity of elections and referendums.

Key Points

  • Tailored approach: Mitigation measures must be specific to the risks identified in each provider's risk assessment, not generic compliance actions.
  • Proportionality: Measures should balance effectiveness with respect for fundamental rights, including freedom of expression and privacy.
  • Electoral focus: Special measures are often needed during pre-electoral, electoral, and post-electoral periods to address heightened risks.
  • Multiple tools: Effective mitigation typically combines content moderation, algorithmic adjustments, user education, transparency tools, and cooperation with authorities and fact-checkers.
  • Continuous improvement: Providers must adapt measures based on audits, regulatory feedback, and evolving threats like generative AI content.
  • Documentation: All mitigation measures must be documented and reported annually, with independent audits verifying their effectiveness.
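The documentation point above can be made concrete with a simple record structure. The DSA does not prescribe any format, so the field names and structure below are purely illustrative assumptions: the only idea being shown is that each measure should be traceable to a specific risk from the Article 34 assessment and carry a fundamental-rights consideration and an audit trail.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical record layout -- invented for illustration; the DSA does not
# mandate a schema. It links one mitigation measure to one identified risk.
@dataclass
class MitigationMeasure:
    risk_id: str                  # identifier from the Article 34 risk assessment
    description: str              # what the measure does
    implemented_on: date
    fundamental_rights_note: str  # how impacts on rights were weighed
    review_log: list[str] = field(default_factory=list)  # audit / regulator feedback

measure = MitigationMeasure(
    risk_id="RISK-2024-007",
    description="Warning labels on unverified election claims",
    implemented_on=date(2024, 3, 1),
    fundamental_rights_note="Labels inform rather than remove; preserves expression",
)
measure.review_log.append("Independent audit: effectiveness confirmed")
```

Keeping the review log as part of the record mirrors the continuous-improvement obligation: each annual report or audit finding can be appended to the measure it evaluated.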

Risk Mitigation vs. Risk Assessment

Risk assessment is the process of identifying and analyzing potential systemic risks arising from a platform's design, function, or use. Risk mitigation is the subsequent step: implementing concrete measures to address those identified risks. Assessment is diagnostic; mitigation is prescriptive.

For example, a risk assessment might identify that a recommender system amplifies election disinformation. The corresponding risk mitigation could include adjusting algorithmic parameters, providing users with alternative ranking options, partnering with fact-checkers, or implementing warning labels on unverified election claims.
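One of the mitigations above, warning labels on unverified claims, can be sketched as a simplified moderation step. Everything here is an assumption for illustration: the flagged-claims set, the substring matching, and the label text are invented, and a real system would rely on fact-checker feeds and far more robust claim matching.

```python
# Illustrative sketch only: attach a warning label to posts that repeat a
# claim flagged by (hypothetical) fact-checking partners. The claim list and
# naive substring matching are stand-ins, not a real moderation pipeline.
FLAGGED_CLAIMS = {
    "ballots are counted twice",
    "voting machines switch votes",
}

def label_if_flagged(post_text: str) -> dict:
    """Return the post with a warning label if it repeats a flagged claim."""
    flagged = any(claim in post_text.lower() for claim in FLAGGED_CLAIMS)
    return {
        "text": post_text,
        "label": "Unverified election claim - see fact-check" if flagged else None,
    }

result = label_if_flagged("Breaking: voting machines switch votes in district 5")
# result["label"] holds the warning string; unrelated posts get label None
```

Note that labeling, unlike removal, leaves the content accessible, which is one way a measure can stay proportionate to freedom of expression while still addressing the identified risk.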

Both processes are mandatory under Article 34 (assessment) and Article 35 (mitigation) of the Digital Services Act and must be carried out at least annually or when significant changes occur.

Related Terms

  • Systemic Risk
  • Risk Assessment
  • Very Large Online Platform (VLOP)
  • Very Large Online Search Engine (VLOSE)
  • Digital Services Coordinator
  • Recommender System
  • Content Moderation
  • Crisis Response Mechanism
  • Electoral Process
  • Fundamental Rights

Risk mitigation: Core Facts

Status: Active Definition
Verified: 2026-03-07
