Days after the Israel-Hamas war erupted last weekend, a top European regulator warned social media platforms including Meta, TikTok, and X (formerly Twitter) to remain vigilant about disinformation and violent posts related to the conflict.
Thierry Breton, the European Commissioner for the internal market, sent letters to the platforms stressing that failure to comply with the Digital Services Act’s rules on illegal online posts could carry consequences for their businesses.
“I remind you that following the opening of a potential investigation and a finding of non-compliance, penalties can be imposed,” wrote Breton to X owner Elon Musk, as an example.
This warning goes beyond what would typically be possible in the U.S., where the First Amendment protects various types of abhorrent speech and prevents the government from stifling it. In an ongoing legal battle brought by Republican state attorneys general, the U.S. government’s efforts to press social media companies to moderate misinformation about elections and Covid-19 are being challenged.
In this case, the attorneys general argued that the Biden administration was overly coercive in urging social media companies to remove such posts. Last month, an appeals court ruled that the White House, the Surgeon General’s office, and the Federal Bureau of Investigation likely violated the First Amendment by coercing platforms’ content moderation decisions. The Biden administration is now awaiting the Supreme Court’s decision on whether the lower court’s restrictions on its contact with online platforms will be upheld.
Based on this case, David Greene, Civil Liberties Director at the Electronic Frontier Foundation, commented, “I don’t think the U.S. government could constitutionally send a letter like that,” referring to Breton’s messages.
Kevin Goldberg, a First Amendment specialist at the Freedom Forum, said the U.S. has no legal definition of hate speech or disinformation because neither is punishable under the Constitution. “What we do have are very narrow exemptions from the First Amendment for things that may involve what people identify as hate speech or misinformation,” Goldberg explained. Certain statements considered hate speech may fall under a First Amendment exemption for “incitement to imminent lawless violence.” Similarly, some forms of misinformation may be punished when they violate laws about fraud or defamation.
However, the First Amendment would likely prevent certain provisions of the Digital Services Act from being enforceable in the U.S.
“We cannot have government officials pressuring social media platforms and telling them to take action in areas like the EU regulators are doing currently in this Israel-Hamas conflict,” Goldberg said, highlighting the limits on government coercion as a form of regulation.
Christoph Schmon, international policy director at EFF, viewed Breton’s warnings as a sign that the European Commission is closely monitoring the situation.
Under the Digital Services Act, large online platforms are required to have robust procedures for removing hate speech and disinformation, while considering concerns about free expression. Non-compliance with the rules can result in fines of up to 6% of global annual revenues.
Meanwhile, in the U.S., even the threat of government penalties carries legal risk.
“Governments need to be clear that their requests are simply requests and not accompanied by threats of enforcement actions or penalties,” said Greene.
A series of letters from New York Attorney General Letitia James to various social media sites exemplifies how U.S. officials may tread this line.
James requested information from Google, Meta, X, TikTok, Reddit, and Rumble on how they identify and remove calls for violence and terrorist acts, citing “reports of growing antisemitism and Islamophobia” following “the horrific terrorist attacks in Israel.”
However, unlike Breton’s letters, James’ letters do not include threats of penalties for failing to remove such posts.
It remains unclear exactly how these new rules and warnings from Europe will affect the way tech platforms approach content moderation, both in the region and worldwide.
Goldberg pointed out that social media companies have already faced restrictions on the types of speech they can host in different countries. Consequently, they may choose to confine any new policies to Europe. However, the tech industry has previously applied European policies, such as the General Data Protection Regulation (GDPR), more broadly.
Goldberg believes it is reasonable for individual users to adjust their settings to exclude certain types of posts they prefer not to see. However, he emphasized that such decisions should be made by each individual user.
Considering the complex history of the Middle East, Goldberg stated that people “should have access to as much content as they want and need to figure it out for themselves, not the content that the government thinks is appropriate for them to know and not know.”