Cambodia News Gazette

Russia, Ukraine, and Social Media and Messaging Apps

Since Russia’s invasion of Ukraine on February 24, 2022, companies providing social media and messaging services have taken a wide array of steps to counter harmful disinformation, to label or block state-sponsored or state-affiliated media, and to introduce extra safety measures.

For over a decade, digital platforms have played an important and growing role in crises, conflicts, and war. As in Ethiopia, Iraq, Israel/Palestine, Libya, Myanmar, Sri Lanka, and Syria, among others, people use platforms to document human rights abuses in conflicts, condemn atrocities, appeal to the international community for action and crowdsource relief and assistance. Platforms are also spaces where governments and others spread disinformation, incite violence, coordinate actions, and recruit fighters. The war in Ukraine is no exception.

This Q&A examines what companies providing popular social media and messaging services have done during this crisis and whether that meets their responsibility to respect human rights. It explains what the companies failed to do in Ukraine prior to this war, and what they have frequently failed to do, or done poorly, in other crises around the world. And it presents what companies should do to protect human rights in crisis situations, including an investment in clear policies, content moderation, and transparency.

It does not address the role of tech companies whose services, hardware, software, and infrastructure enable people to access the internet. These companies also have human rights responsibilities, and their withdrawal from Russia exacerbates the risk of isolation from the global internet for the country’s residents.

What are the human rights responsibilities of companies that provide social media and messaging services?

Companies have a responsibility to respect human rights and remedy abuses under the UN Guiding Principles on Business and Human Rights. This requires them to avoid infringing on human rights and to take steps to address adverse human rights impacts that stem from their practices or operations. Actions that companies take should be in line with international human rights standards, conducted in a transparent and accountable way, and enforced in a consistent manner.

Did companies do enough to meet their human rights obligations in Ukraine before February 24, 2022?

As Human Rights Watch and many of its partners have documented for years, social media companies have chronically underinvested in responding to human rights challenges in countries around the world where people rely on their services. Ukraine is no exception.

Since even before Russia’s occupation of Crimea and its support for the conflict in eastern Ukraine, Russian authorities have deployed elaborate, state-driven propaganda to disseminate malicious disinformation, invented “facts,” lies, and wild exaggerations. These include baseless claims that Kyiv has been overrun by “Nazis” and that ethnic Russians face existential threats. This propaganda has played a crucial role in escalating tensions in Ukraine and fanning the flames of the conflict since it started in 2014.

Since 2014, Ukraine has repeatedly urged companies to improve their efforts in the country. The previous president reportedly pressed Facebook to stop the Kremlin from spreading misinformation on the social network that was fomenting distrust in his then-new administration and promoting support for Russia’s occupation of parts of Ukraine, including by posing a question to Mark Zuckerberg during a 2015 town hall meeting. In its 2021 submission to the UN special rapporteur on freedom of expression, Ukraine stated that “measures taken by the social media companies, the practices of blocking fake profiles and activities of fact-checkers were only partly effective,” and that the “effectiveness of the social media companies’ activities aimed at combating disinformation is difficult to estimate and would benefit from higher level of transparency.”

In September 2021, Digital Transformation Minister Mykhailo Fedorov met with Google officials in Silicon Valley and asked them to establish an office in Ukraine because, in his view, during military aggression YouTube needs to conduct its content moderation locally rather than from Russia.

There may be good reasons for companies not to have in-country staff moderating content or acting as a local representative, for example, to shield staff from government pressure to comply with arbitrary censorship requests or threats of imprisonment. In addition, it is crucial for companies to hire content moderators who are not just fluent in local languages, but who are attuned to local context and able to assess content in an unbiased and independent manner. Human Rights Watch wrote to Google on March 9 inquiring whether its content moderators for Ukraine are based in Russia and whether it has an office in Ukraine. Google had not responded at the time of publication.

What steps have social media and messaging service companies taken since February 24?

Since February 24, companies providing social media and messaging services have taken many steps in response to the war in Ukraine, most of them aimed at countering harmful disinformation, adding labels to or blocking state-sponsored or state-affiliated media, or introducing extra safety measures. Some of these measures apply to either Ukraine or Russia, some apply in the EU only, and some apply globally. Some decisions were made in response to government requests, some in defiance of government requests, and others in response to public pressure, or at the companies’ own initiative.

Both the volume and the speed of the policy changes that companies have announced since February 24, covering a wide range of their services, are unique. That is why Human Rights Watch and others are closely monitoring how social media companies and popular messaging apps address the evolving situation, including how they respond to government requests and sanctions. Telegram remains the outlier. (See tables below.)

[Tables: Blocking of Russian state-affiliated and state-sponsored media; Other actions taken against Russian state-affiliated and state-sponsored media; Other actions taken]
At the same time, many of the measures that companies have introduced are not new. In other situations, companies have adopted similar measures, but only after more sustained government or public pressure and in a more limited manner (see the section “Are social media and messaging companies meeting their human rights responsibilities in wars and crises globally?”).

For example, the company formerly known as Facebook has a set of “break glass” measures, which Nick Clegg, Facebook’s (now Meta’s) president of global affairs, said “allow us to – if for a temporary period of time – effectively throw a blanket over a lot of content that would freely circulate on our platforms.” According to Clegg, this would allow the company to “play our role as responsible as we can to prevent that content, wittingly or otherwise, from aiding and abetting those who want to continue with the violence and civil strife that we’re seeing on the ground.” These measures include restricting the spread of live video on its platforms and reducing the likelihood that users will see content that its algorithms classify as potential misinformation. The company reportedly put these measures in place in what it calls “at-risk countries” such as Myanmar, Ethiopia, and Sri Lanka, and in the US ahead of the 2020 presidential election.

Platforms have also removed state-affiliated media in the past in response to sanctions and terrorist designations. For example, in 2019 Instagram reportedly removed the accounts of government-owned and Islamic Revolutionary Guard Corps-affiliated media agencies such as Tasnim News Agency, the Iran Newspaper, and Jamaran News. A company spokesperson for Facebook, which owns Instagram, said at the time, “We operate under US sanctions laws, including those related to the US government’s designation of the IRGC and its leadership.”

Google and Facebook paused political ads just ahead of the 2020 election day in the US, while Twitter rolled out new features to add “friction” to the spread of disinformation. Platforms have also formed special teams to respond to crises and have introduced, or directed users to, special account security measures in response to emergency situations, such as after the fall of the Afghan government in August 2021.

While some of the platforms’ actions around the war in Ukraine resemble those in other situations, some are also inconsistent with their policies elsewhere. For example, Meta announced on March 11 that Facebook and Instagram users in Ukraine are temporarily allowed to call for violence against Russian armed forces in Ukraine in the context of the invasion. However, no such policy has been announced for Syria, for example, where Russia has been fighting in partnership with Syrian armed forces since September 2015, and where Human Rights Watch has documented serious violations by Russian forces that include apparent war crimes and may amount to crimes against humanity.

Do the steps taken by social media and messaging apps in Ukraine meet their human rights responsibilities?

It is too early to assess the adequacy of steps by tech companies since February 24 against their human rights responsibilities. Some reports indicate that their steps to counter harmful disinformation and misinformation are falling short.

None of the major social media networks and messaging platforms have been fully transparent about what resources they direct toward user safety and content moderation in Ukraine. Human Rights Watch wrote to Google, Meta, and Twitter on March 9, and to Telegram and TikTok on March 10, to inquire how many Ukrainian speakers they have moderating content, whether they employ staff based in Ukraine who are tasked with moderating content, and how they ensure that content moderators work in an unbiased and safe manner.

Meta shared a link to a newsroom post, with a consolidated set of updates on its approach and actions, which include establishing a special operations center staffed by experts, including native Russian and Ukrainian speakers, who are monitoring the platform around the clock; new safety features; and steps to fight the spread of misinformation and provide more transparency and restrictions around state-controlled media outlets. It also shared links to a summary of its “At Risk Country” work and investments. The company did not say whether Ukraine is considered an “At Risk Country” or respond to our specific questions.

TikTok shared updates to its policies on state-controlled media, said that it would suspend livestreaming and the uploading of new content in Russia to comply with Russia’s new “fake news” law, and said that it is promoting digital literacy and safety tools, as detailed on its website. The company also said that it had paused all advertising in Russia and Ukraine. TikTok said it would not disclose information about its operations and employees, if any, in Ukraine, in order to protect the broader team, and that it does not provide the exact locations of its moderation sites or the number of content moderators for the platform. However, it said that TikTok’s content moderation teams speak more than 60 languages and dialects, including Russian and Ukrainian.

At the time of publication, Human Rights Watch had not received responses from Google, Telegram, or Twitter, but will update this document to reflect any responses received.

To fully assess the effectiveness of company responses in terms of respecting users’ rights and mitigating human rights risks, as well as the human rights impact of both action and inaction, there is an urgent need for the companies to provide access to data to independent researchers, including those in the fields of human rights, disinformation, hate speech, and incitement to violence, among others.

Many actions that social media companies have taken during the war in Ukraine, such as account takedowns, geo-blocking state-affiliated media channels, content removal, and demoting content, have implications for freedom of expression. Companies need to be able to demonstrate how their actions fit within a human rights framework—specifically, whether restrictions on freedom of expression are necessary and proportionate to a legitimate aim, and whether they are procedurally fair.

Furthermore, it is important to assess whether these actions resulted from clear, established, transparent processes for responding to government requests or enforcing company policies, or from political pressure, and to consider the potential unintended consequences of these actions.

What steps have Ukraine and Russia taken with regard to social media companies and messaging services?

Ukraine has taken a number of actions to combat disinformation over the past few years, including restricting access to Russian TV channels and social media platforms.

In 2017, then-President Petro Poroshenko banned several Russian-owned internet firms, including VKontakte (VK) and Odnoklassniki, and at least 19 Russian news sites. In 2021, President Volodymyr Zelensky signed a Ukrainian security council decree imposing sanctions for five years on eight pro-Russian media and TV companies for allegedly “financing terrorism.” At the time, Human Rights Watch said that “Ukraine’s government has every right to address disinformation and propaganda with serious measures. Yet there is no denying that in shutting down broadcasts, the sanctions decrease media pluralism and should be held up to close scrutiny.” In its 2021 submission to the UN special rapporteur on freedom of expression, Ukraine described its bans as “ineffective” because the channels could still operate on social media.

Since February 24, Digital Transformation Minister Fedorov has issued a large number of public requests to tech companies, including those that provide internet infrastructure. Some requests, including those to Apple, Google, Meta, Twitter, YouTube, Microsoft, PayPal, Sony, and Oracle, asked companies to cease operations in the Russian Federation and, in some cases, to block content from Russia-affiliated media globally.

Russia has escalated its assault on online (and offline) expression over the past year, and especially since its invasion of Ukraine. In 2016, the authorities blocked LinkedIn for noncompliance with Russia’s data localization requirements. Telegram was banned in Russia in 2018 after the company refused to hand over user data; the ban was lifted in 2020.

On February 25, Roskomnadzor, the Russian internet regulator, announced that it would partially restrict access to Facebook in Russia, in retaliation for Meta blocking four Russian state media accounts. Meta’s Clegg tweeted that on February 24, “Russian authorities ordered [Meta] to stop independent fact-checking and labelling of content posted on Facebook” by those state-owned media. After Meta refused to comply, the Russian government announced that it would restrict access to Meta services. On March 4, authorities fully blocked Facebook. Online monitoring groups confirmed issues with accessing Facebook.

On February 26, Twitter announced that Russian authorities had restricted access to its services in Russia. Reports by online monitoring groups confirm that some Twitter users in Russia experienced serious interruptions in using the platform.

On March 11, Roskomnadzor announced the full blocking of Instagram in Russia, to go into effect March 14. The blocking came after Meta introduced exceptions to its violent speech policies, allowing calls for violence against Russian armed forces in Ukraine. The Prosecutor General’s Office filed a lawsuit against Meta in court, seeking to ban it as “extremist.” The Investigative Committee, Russia’s criminal investigation service, opened a criminal investigation against Meta’s employees.

The level of control and censorship that Russia’s measures seek to achieve deprives freedom of expression and the right of access to information of any meaningful content, and cannot be justified under international law, even in times of war.

Are social media and messaging companies meeting their human rights responsibilities in wars and crises globally?

In recent years, some social media companies have reacted to emergency or conflict situations by taking steps to reduce the spread of potential incitement to violence, hate speech, and disinformation, removing accounts that violated their policies, temporarily pausing advertising, and forming special operations centers to monitor their platforms and respond to emerging issues.

At the same time, the platforms are not responding adequately to many conflicts or fragile situations, and in some places their failure to act has facilitated human rights abuses.

Facebook’s own internal research, for example, shows that the company’s language capacities are inadequate to address the proliferation of global misinformation and incitement to violence, as revealed by a whistleblower disclosure that cites examples from Afghanistan, Cambodia, India, Sri Lanka, Israel/Palestine, and Arabic-speaking countries. In response to the whistleblower’s allegations that internal research showed that the company is not doing enough to eradicate hate, misinformation, and conspiracy, a company representative said, “Every day our teams have to balance protecting the right of billions of people to express themselves openly with the need to keep our platform a safe and positive place. We continue to make significant improvements to tackle the spread of misinformation and harmful content. To suggest we encourage bad content and do nothing is just not true.”

In response to a Human Rights Watch question about the whistleblower’s allegations, Meta provided links to its approach to At Risk Countries.

When companies have taken steps to respond to emergencies, crises, and conflicts, those steps have not always been sufficient, and they have sometimes had unintended consequences. The way in which the steps have been implemented continues to raise concerns about transparency and accountability. For example, platforms understandably restrict content that unlawfully incites or promotes violence, but such content, especially during crises and conflicts, can also have potential evidentiary value that investigators, researchers, journalists, and victims can use to document violations and help hold those responsible on all sides to account for serious crimes. Research by Human Rights Watch, as well as by the Berkeley Human Rights Center, Mnemonic, and WITNESS, has shown that potential evidence of serious crimes is disappearing, sometimes without anyone’s knowledge.

Digital rights organizations, including some from Ukraine, sent a letter to Telegram’s CEO and co-founder Pavel Durov in December 2021 urging the company to address a range of human rights problems on the platform, including around user safety and content moderation. The letter cites a lack of policy and arbitrary decision-making around content moderation and around the circumstances under which the platform would share data with governments. The organizations also call for effective mechanisms to report and remedy potential abuses by other Telegram users. Telegram has not responded.

Transparency and accountability in decision-making by platforms are essential because their actions can have negative effects even when not intended, can disproportionately affect certain people or groups, and can set dangerous precedents that governments may seek to exploit. Clarity from a company on how it reached a certain decision, including the human rights justification for the action, can help mitigate such risks. It also facilitates remedying harm, both to individual users and at a more general policy level.

One example of a lack of transparency and accountability was Facebook and Instagram’s response to the hostilities that broke out in Israel and Palestine in May 2021. The company appears to have taken steps consistent with the “break glass” measures described above, ostensibly aimed at slowing the spread of violent content, though it never stated this explicitly. Research by Human Rights Watch and other human rights groups shows, however, that Facebook removed or suppressed content from activists, including content about human rights abuses.

The company attributed many of the wrongful takedowns and content suppression to “technical glitches” – a vague explanation that inhibits efforts to hold the company accountable for its actions and to remedy harm. Following pressure from civil society and a recommendation by the Facebook Oversight Board that Facebook conduct a thorough examination to determine whether its content moderation had been applied without bias, Facebook commissioned a human rights impact assessment, which is still underway.

Human Rights Watch wrote to Facebook in June 2021 to seek the company’s comment and to inquire about temporary measures and longstanding practices around the moderation of content related to Israel and Palestine. The company responded by acknowledging that it had already apologized for “the impact these actions have had on their community in Israel and Palestine and on those speaking about Palestinian matters globally,” and provided further information on its policies and practices. However, the company did not answer any of the specific questions from Human Rights Watch or meaningfully address any of the issues raised.

In Myanmar, Facebook’s late and insufficient response in the lead-up to the ethnic cleansing campaign against the Rohingya in 2017 also warrants closer scrutiny. In August 2018, a UN report concluded that Myanmar’s security forces committed abuses against the ethnic Rohingya population that amounted to crimes against humanity, war crimes, and possible genocide. It also found that the role of social media in the lead-up to the atrocities against the Rohingya was “significant.” Specifically, the UN report found that “Facebook has been a useful instrument for those seeking to spread hate in a context where, for most users, Facebook is the Internet.”

Soon after the report was released, Facebook removed 18 accounts and 52 pages associated with the Myanmar military, including the page of its commander-in-chief, Sr. Gen. Min Aung Hlaing. This step came years after civil society organizations in Myanmar had flagged for the company clear examples of its tools being used to incite violence, and had criticized Facebook’s inadequate response. Even after Facebook removed the 18 accounts, hired Burmese-speaking content moderators to monitor the platform for hate speech, and developed algorithms and AI to detect “hatred,” civil society in Myanmar continued to report seeing incitement to violence and hate speech on the platform.

Following the February 1, 2021 military coup in Myanmar, and in a tacit acknowledgment that it could do more, the company announced a ban on the remaining Myanmar military and military-controlled state and media entities from Facebook and Instagram, as well as on ads from military-linked commercial entities. It later announced that it would ban military-controlled businesses from the platform. Facebook’s decision to disable the official news page of the Myanmar military drew some criticism because the page was one of the few official means of receiving communications directly from the military.

What should social media and messaging platforms do to better respect human rights in conflicts and emergencies?

While Russia’s invasion of Ukraine may present an unprecedented challenge in some ways, these platforms have had to deal with conflict playing out on and through their services for years, including in Ukraine. It is past time they took their responsibilities seriously.

Under the UN Guiding Principles on Business and Human Rights, companies should conduct human rights due diligence that includes identifying, preventing, ceasing, mitigating, remediating, and accounting for potential and/or actual adverse impacts on human rights. Human rights due diligence is not a one-time activity, but an ongoing process, which should enable companies to periodically evaluate new risks as they emerge.

As a first and fundamental step, companies need to address their chronic underinvestment in user safety outside of North America and Western Europe. This includes publishing their terms of service and community guidelines in relevant languages, investing in responsible moderation practices, both human and automated, and being transparent about where they are allocating resources and why, among other steps.

Companies should align their policies with international human rights standards, carry out rigorous human rights impact assessments for product and policy development, engage in ongoing assessment and reassessment, and consult with civil society in a meaningful way.

Companies should also radically increase transparency and accountability in their content moderation practices, as outlined by the Santa Clara Principles, which were created by civil society groups to set baseline standards for platforms’ content moderation practices informed by due process and human rights. Among other measures, the principles call for human rights and due process considerations to be integrated at all stages of the content moderation process; understandable rules and policies; cultural competence so that those making moderation and appeal decisions understand the language, culture, and political and social context of the posts they are moderating; and integrity and explainability of moderation systems, including both automated and non-automated components, to ensure that they work reliably and effectively.

Companies should also strengthen their policies and their enforcement. Even though many platforms have policies to counter harmful disinformation, to label state- or government-controlled or affiliated media, and to counter platform manipulation, they still find themselves unprepared when a conflict situation arises and then introduce new policies or measures on the fly. Why does this continue to happen?

Social media platforms and other content hosts that choose to actively remove content should take care to preserve and archive removed content that may serve as evidence of human rights abuses, including content identified by human rights organizations, while ensuring the privacy and security of vulnerable individuals associated with that content. There is also an urgent need to provide access to data to independent researchers, including those in the fields of human rights, disinformation, hate speech, and incitement to violence, among others, to assess the extent to which platforms are effectively mitigating, or contributing to, the human rights risks facilitated by their platforms and meeting their human rights responsibilities.

More fundamentally, it is crucial to address the underlying business model upon which dominant platforms are based. This model relies on pervasive tracking and profiling of users that not only intrudes on people’s privacy, but feeds algorithms that promote and amplify divisive and sensationalist content. Studies show that such content earns more engagement and, in turn, profit for companies. The pervasive surveillance upon which this model is built is fundamentally incompatible with human rights, which is why surveillance-based advertising should be banned.

Source: Human Rights Watch
