Major technology companies like Google and Facebook parent company Meta will need to control their platforms more strictly to better protect European users from hate speech, misinformation and other harmful online content under landmark EU legislation approved early Saturday.
EU officials endorsed the agreement in principle on the Digital Services Act after lengthy final negotiations that began on Friday. The law will also force technology companies to make it easier for users to flag problems, ban online ads targeting children and empower regulators to punish non-compliance with billions in fines.
The Digital Services Act, one half of a revision of the 27-nation bloc’s digital rulebook, helps cement Europe’s reputation as the global leader in efforts to rein in social media companies and other digital platforms.
“With the DSA, the time of big online platforms behaving like they are ‘too big to care’ is coming to an end,” said EU Internal Market Commissioner Thierry Breton. EU Commission Vice-President Margrethe Vestager added: “With today’s agreement we ensure that platforms are held accountable for the risks their services can pose to society and citizens.” The EU’s approach contrasts with that of the United States, where lobbyists representing Silicon Valley’s interests have largely succeeded in keeping federal lawmakers at bay.
While the Department of Justice and the Federal Trade Commission have filed major antitrust cases against Google and Facebook, Congress remains politically divided in its efforts to address competition, online privacy, disinformation and more.
The EU’s new rules should make technology companies more accountable for content created by users and amplified by their platforms’ algorithms.
The largest online platforms and search engines, defined as having more than 45 million users, will be subject to additional scrutiny.
Breton said the EU wants plenty of sticks to back up the law, including effective and dissuasive fines of up to 6% of a company’s annual global revenue, which for large technology companies would amount to billions of dollars. Repeat offenders could be banned from the EU, he said.
The preliminary agreement was reached between the European Parliament and the bloc’s member states. It still needs to be formally approved by those institutions, which is expected after the summer, but that should not pose any political problem. The rules will not take effect until 15 months after that approval, or on 1 January 2024, whichever is later.
“The DSA is nothing short of a paradigm shift in tech regulation. It’s the first major attempt to set rules and standards for algorithmic systems in digital media markets,” said Ben Scott, a former tech policy adviser to Hillary Clinton who is now executive director of the advocacy group Reset.
The need to regulate Big Tech more effectively came into sharper focus after the 2016 US presidential election, in which Russia used social media platforms to try to influence voters. Technology companies like Facebook and Twitter promised to crack down on disinformation, but the problems have only gotten worse. During the pandemic, health misinformation flourished, and again companies were slow to act, cracking down only after years of allowing anti-vaccine falsehoods to thrive on their platforms.
Under the EU law, governments will be able to ask companies to remove a wide range of content deemed illegal, including material that promotes terrorism, child sexual abuse, hate speech and commercial fraud. Social media platforms like Facebook and Twitter would need to give users tools to flag such content in an “easy and effective way” so that it can be removed quickly. Online marketplaces like Amazon would have to do the same for dodgy products, such as counterfeit sneakers or unsafe toys.
These systems will be standardized so that they work the same way on any online platform.
Germany’s justice minister, Marco Buschmann, said the rules would safeguard freedom of expression online by ensuring that websites can be made to review decisions to delete posts. At the same time, he said, platforms would have to prevent their services from being misused.
“Death threats, aggressive insults and incitement to violence are not expressions of freedom of speech, but rather attacks on free and open discourse,” he said.
Technology companies that had furiously lobbied Brussels to dilute the legislation reacted cautiously.
Twitter said it would review the rules in detail and that it “supports smart, forward-looking regulation that balances the need to tackle online harm with protecting the open internet.” TikTok said it awaits the full details of the law but supports its goal of harmonizing the approach to online content issues, and that it welcomes the DSA’s focus on transparency as a means of accountability. Google said it looks forward to “working with policy makers to get the remaining technical details right to ensure the law works for everyone.” Amazon referred to a blog post from last year in which it said it welcomed measures that increase trust in online services. Facebook did not respond to a request for comment.
The Digital Services Act prohibits ads targeting minors as well as ads based on users’ gender, ethnicity or sexual orientation. It also bans deceptive techniques that companies use to nudge people into doing things they did not intend, such as signing up for services that are easy to opt into but hard to opt out of.
To show that they are making progress in limiting this practice, technology companies would need to perform annual risk assessments of their platforms.
Until now, regulators have had no access to the inner workings of Google, Facebook and other popular services. Under the new law, however, companies must be more transparent and provide information to regulators and independent researchers on their content-moderation efforts. That could mean, for example, making YouTube turn over data on whether its recommendation algorithm has been directing users to more Russian propaganda than usual.
To enforce the new rules, the EU Commission is expected to hire more than 200 new staff. To pay for it, technology companies will be charged an oversight fee. Experts said the new rules are likely to set in motion copycat regulatory efforts by governments in other countries, while technology companies will also be under pressure to roll out the rules beyond EU borders.
“If Joe Biden stands at the podium and says, ‘By golly, why don’t American consumers deserve the same protections that Google and Facebook give European consumers?’ it’s going to be hard for those companies to refuse to apply the same rules” elsewhere, Scott said.
But they are unlikely to do so voluntarily, said Zach Meyers, a senior researcher at the think tank Centre for European Reform. There is simply too much money at stake if a company like Meta, which owns Facebook and Instagram, is restricted in how it can target advertising at specific groups of users.
“The big tech companies will strongly oppose other countries adopting similar rules, and I cannot imagine the companies voluntarily applying these rules outside the EU,” Meyers said.
Last month, the EU reached a separate agreement on its Digital Markets Act, a law aimed at curbing the market power of technology giants and making them treat smaller rivals fairly.
And in 2018, the EU’s General Data Protection Regulation set the global standard for data protection, though it has been criticized as ineffective at changing the behavior of technology companies. Much of the problem stems from the fact that a company’s lead privacy regulator is in the country where its European headquarters is located, which for most technology companies is Ireland.
Irish regulators have opened dozens of data-privacy investigations but have issued decisions in only a handful. Critics blame understaffing, while the Irish regulator says the cases are complex and time-consuming.
EU officials say they have learned from this experience and will make the Commission the enforcer of the Digital Services Act and the Digital Markets Act.