Authors: Sammy Camy & Landon Harris @ Harvard Cyberlaw Clinic
Content Warning: The following scenarios are hypothetical but may still contain content that some individuals find triggering, including discussion of discrimination: racism, sexism, homophobia, transphobia, police brutality, xenophobia, general violence, and abuse.
You learn your employer is anti-trans when they decide to fire you after discovering that you posted about your gender affirmation surgery on your public Instagram account.
Not OK – Social media posts cannot be used as justification to fire an employee when that firing is based on the employee having a protected characteristic like race, color, sex, religion, age, or disability. If an employee can prove an employer fired them based on sex or one of the other protected characteristics, the employer has violated Title VII of the Civil Rights Act of 1964, and the employee may have legal recourse.
Resources & Citations
⚖️ Employment Relationships
⚖️ Title VII
In general, whether an employee can be fired by their employer for something posted on social media depends on the specifics of that employee’s contract with their employer.
Some employee-employer contracts stipulate that an employee can only be fired for cause (e.g., poor performance, misconduct, economic necessity, etc.) and/or specifically identify circumstances in which an employee may be fired. If an employee’s social media content falls outside of one of these specified circumstances, then the employee most likely can’t be fired for a social media post.
However, most employment contracts state that employment is “at-will,” and employment relationships are presumed to be “at-will” in most U.S. states: unless your contract specifically states otherwise, your employment is automatically assumed to be “at-will.” “At-will” employment means both the employee and employer maintain the working relationship at their own discretion. In other words, the employee is free to quit at any time, and the employer can fire the employee for any reason, so long as that reason is not prohibited by law. Because an “at-will” employee can be dismissed for any legal reason, if an employer determines that an employee’s social media content is grounds for dismissal, in most cases they are free to fire the employee for that reason.
However, one reason that employers are prohibited from using as a basis to fire employees (no matter what contract they’re on) is the employee’s demographic characteristics, such as race, sex, age, and religious affiliation. Title VII of the Civil Rights Act of 1964 (Title VII) makes it unlawful for employers to discriminate against someone based on race, color, national origin, sex (including pregnancy, sexual orientation, and gender identity), or religion. So even if you are an at-will employee, if you can prove your employer has disciplined you based on a social media post related to your protected characteristics (such as gender identity), you may have legal recourse.
Resources & Citations
⚖️ Employment Relationships
⚖️ Title VII
You get into a disagreement with someone online. You say the points they’re making are “preposterous” and claim “nobody should listen to them.” You do not physically attack them in real life, threaten them with violence, or use any racial slurs. Your posts get removed.
Not OK to Remove – As long as you follow the social media site's Community Guidelines, disagreements between individuals are allowed and your content should not be removed.
Resources & Citations
⚖️ Facebook Community Standards
⚖️ Instagram Community Guidelines
⚖️ Twitter Rules
⚖️ TikTok Community Guidelines
⚖️ Reddit Rules
As long as you follow the social media site's Community Guidelines and remain critical of a person’s viewpoint without engaging in personal attacks, disagreements between individuals are allowed and your content should not be removed. However, when a disagreement rises to the level of personal attacks, you might want to consult the Community Guidelines of the platform you are posting on. Content that is commonly prohibited by social media sites includes hate speech, threats, encouragement of violence, and posts that repeatedly target private individuals to degrade or shame them.
Resources & Citations
⚖️ Facebook Community Standards
⚖️ Instagram Community Guidelines
⚖️ Twitter Rules
⚖️ TikTok Community Guidelines
⚖️ Reddit Rules
You get into a disagreement with someone online. You repeatedly call them a slur based on their religious affiliation, claim that they are too weak to beat you in a fight in real life, and threaten to find where they live. Your post gets taken down.
OK to Remove – Most popular social media platforms moderate content that seriously threatens violence, wishes physical harm on a person, or intends to incite others to inflict harm against that person. Additionally, content that targets private individuals with statements intended to degrade or shame them may be considered bullying or harassment and may be removed accordingly. Moreover, when threats or promotion of violence are made on the basis of race, ethnicity, national origin, sexual orientation, gender identity, religious affiliation, or disability, they could be considered hate speech.
Resources & Citations
Bullying & Harassment Policies
⚖️ Facebook/Instagram
⚖️ Twitter
⚖️ TikTok
⚖️ Reddit
Most popular social media platforms moderate content that seriously threatens violence, wishes physical harm on a person, or intends to incite others to inflict harm against that person. Additionally, content that targets private individuals with statements meant to degrade or shame them may be considered bullying or harassment and may be removed accordingly. Content that threatens, promotes violence against, or otherwise degrades a person on the basis of race, ethnicity, national origin, sexual orientation, gender identity, religious affiliation, or disability could also be considered hate speech and will likely be removed.
Platforms often distinguish between content directed towards private individuals and those directed towards public figures. More leeway is typically granted to posts that are highly critical of people who are featured in the news or have large public audiences. But even in the context of public figures, particularly severe or abusive personal attacks may violate a platform’s content guidelines. For example, for public figures, Facebook will “remove attacks that are severe as well as certain attacks where the public figure is directly tagged in the post or comment,” and extends even greater protection to private individuals, for whom it will generally “remove content that's meant to degrade or shame.”
For more specific information on what is and is not allowed on different social media platforms, see content guidelines for bullying and harassment listed below:
Resources & Citations
Bullying & Harassment Policies
⚖️ Facebook/Instagram
⚖️ Twitter
⚖️ TikTok
⚖️ Reddit
You post an anti-racist statement on your Facebook Profile that says, “@white people, I’m tired of y’all! All these Karens need to confront their racist family and friends and contribute to combatting white supremacy and systemic racism.” Your post gets taken down.
Grey Area – Facebook’s Community Standards prohibit hate speech, which is defined as a direct attack against people – rather than concepts or institutions – on the basis of protected characteristics like race, ethnicity, and national origin. If another user reports this post for hate speech, Facebook may remove it because they think the post criticizes white people as a race and makes a generalization about negative character traits. However, in 2020, Facebook changed its algorithm to deprioritize policing comments directed at “Whites.” Facebook treats this sort of content as less likely to be harmful than hateful language directed against racial minorities, members of the LGBTQ+ community, and members of certain religious groups.
Resources & Citations
⚖️ Haimson Research Article, Disproportionate Removals and Differing Content Moderation Experiences for Conservative, Transgender, and Black Social Media Users: Marginalization and Moderation Gray Areas
⚖️ USA Today, Facebook while black: Users call it getting 'Zucked,' say talking about racism is censored as hate speech
⚖️ Washington Post, Facebook to start policing anti-Black hate speech more aggressively than anti-White comments, documents show
Facebook’s Community Standards prohibit hate speech, which is defined as a direct attack against people – rather than concepts or institutions – on the basis of protected characteristics like race, ethnicity, and national origin. If another user reports this post for hate speech, Facebook may remove it because they think the post criticizes white people as a race and makes a generalization about negative character traits.
However, this is a grey area because what is and is not considered hate speech can depend on the poster’s identity and the identity of those described in the post. A Black person calling on white people to serve as allies in the fight against racism is an important part of anti-racism discourse, and algorithms may struggle to capture this context when moderating content for hate speech.
In 2020, Facebook changed its algorithm to deprioritize policing contemptuous comments about “Whites.” The company’s technology now treats this sort of content as less likely to be harmful, and such posts are no longer automatically deleted by the company’s algorithms. The algorithm now prioritizes hateful language considered the “worst of the worst,” including slurs directed at the Black, Muslim, multiracial, LGBTQ+, and Jewish communities. Facebook’s efforts to retool its algorithm have resulted in minimal progress, however, as Black users continue to report their posts being removed at a greater frequency than their white counterparts’ posts.
Resources & Citations
⚖️ Haimson Research Article, Disproportionate Removals and Differing Content Moderation Experiences for Conservative, Transgender, and Black Social Media Users: Marginalization and Moderation Gray Areas
⚖️ USA Today, Facebook while black: Users call it getting 'Zucked,' say talking about racism is censored as hate speech
⚖️ Washington Post, Facebook to start policing anti-Black hate speech more aggressively than anti-White comments, documents show
A public school student, while off-campus, sent a Snapchat message to 250 friends saying that she was really #@!$%!!! mad about what happened on the season finale of her favorite TV show. The school learns about this and expels her for using profanity.
Not OK – Public schools generally cannot punish their students for things posted on social media as long as the posts are made off campus, outside of school hours, and do not relate to school activities.
Resources & Citations
⚖️ ACLU: Know Your Students' Rights
⚖️ Mahanoy Area Sch. Dist. v. B.L., 141 S. Ct. 2038 (2021)
⚖️ Harvard Law Review: Mahanoy Area School District v. B. L.
This hypothetical question is modeled after the Supreme Court case Mahanoy Area School District v. B.L., 141 S. Ct. 2038 (2021), which concerns a public school student’s free speech rights while off campus. B.L., as part of tryouts for her public high school’s varsity cheerleading team, had to agree to a set of rules that included a prohibition on the use of “foul language and inappropriate gestures” when representing the school and a ban on placing “any negative information regarding cheerleading, cheerleaders, or coaches . . . on the internet.” When B.L. did not receive a spot on the varsity team, she took to Snapchat to voice her frustrations via two photos on her Snapchat story. The first image was a selfie of her and a friend posing with their middle fingers raised, with the caption “[F]uck school fuck softball fuck cheer fuck everything.” The second image included a caption that detailed her disappointment about her rejection from the varsity team. One of B.L.’s Snapchat “friends” was the daughter of one of the cheerleading coaches and brought the photos to the coaches’ attention.
The team’s coaches, the school’s athletic director, principal, superintendent, and school board determined that, because B.L.’s posts used profanity in connection with cheerleading, B.L. had violated the team’s rules. The coaches felt the need to enforce the team’s rules against B.L. to “avoid chaos.” As a result, B.L. was suspended from the cheerleading team for the upcoming year. B.L. and her family filed suit.
The Supreme Court sided with B.L. and held that the First Amendment limits a public school’s ability to regulate off-campus student speech. Similar to how public schools may only regulate student speech and conduct on campus if the student’s speech “cause[s] material and substantial disruption at school,” the Court held that public schools only have special interests in regulating off-campus student speech in situations implicating the school’s regulatory interests. The Court provided three features of off-campus speech that will typically diminish the strength of a school’s authority to regulate off-campus speech as opposed to on-campus speech: (1) off-campus speech is typically “within the zone of parental, rather than school-related, responsibility,” and the school rarely stands in the place of a parent when a student speaks off campus; (2) because off- and on-campus speech together encompass all speech uttered by a student, courts should be especially skeptical of a school’s efforts to regulate off-campus speech without a meaningful justification; and (3) schools have a continued interest in protecting unpopular expression off campus as America’s public schools function as “the nurseries of democracy.”
While Mahanoy Area School District v. B.L. tells us about the instances in which public school officials can regulate off-campus student speech, it is unclear how far this holding extends to other academic contexts. The modern educational system did not exist when the First Amendment was originally drafted, and case law about off-campus speech is inconclusive. The First Amendment applies to students studying at public schools at all levels (primary, secondary, and postsecondary) because public school officials are considered governmental actors. However, the First Amendment does not apply to students attending private schools because private schools are not considered governmental actors.
Resources & Citations
⚖️ ACLU: Know Your Students' Rights
⚖️ Mahanoy Area Sch. Dist. v. B.L., 141 S. Ct. 2038 (2021)
⚖️ Harvard Law Review: Mahanoy Area School District v. B.L.
You see in the news that a local public official has voted against a bill you supported and decide to post on the public official’s social media profile to voice your opposition. As a result, the public official blocks you from accessing their profile and deletes your comments on their profile.
Grey Area – A public official’s obligations on social media differ depending on whether they’re speaking as private individuals or government actors.
If a public official is speaking as a private individual on their personal account, the First Amendment protects their right to limit their audience or curate the messages on their personal account, just like any other member of the public.
If a public official is acting on behalf of the government via an official government account where they invite comments, then they are subject to the limits the First Amendment imposes on government actors. Thus, if a public official, via an official governmental account, creates a public forum for discourse, the public official cannot block or exclude someone from engaging in that discourse based on the viewpoints expressed. However, a public official can limit other kinds of interactions (e.g. posting personal threats or profane language).
Resources & Citations
⚖️ ACLU: When Public Officials Censor You On Social Media
⚖️ ACLU: Court Rules Public Officials Can’t Block Critics on Facebook
A public official’s obligations on social media differ depending on whether they’re speaking as private individuals or government actors. If a public official is speaking as a private individual on their personal account, the First Amendment protects their right to limit their audience or curate the messages on their personal account, just like any other member of the public.
If a public official is acting on behalf of the government via an official government account where they invite comments, then they are subject to the limits the First Amendment imposes on government actors. Thus, if a public official, via an official governmental account, creates a public forum for discourse, the public official cannot block or exclude someone from engaging in that discourse based on the viewpoints expressed.
The boundaries between the two rules are not always cut-and-dried. If a public official invites comments on a social media page concerning public matters or otherwise intentionally designates it as a space for public discussion, the social media page may become a “limited” or “designated” public forum. Where public forums are involved, public officials cannot exclude people from accessing the page just because the official disagrees with them. Thus, the answer to this question depends on several factors, like the kind of profile where the post was uploaded (private or official government) and whether the public was invited to comment.
Finally, note that while an official speaking as a government actor can’t limit interactions based on viewpoint, they can limit other kinds of interactions (e.g. posting personal threats or profane language).
Resources & Citations
⚖️ ACLU: When Public Officials Censor You On Social Media
⚖️ ACLU: Court Rules Public Officials Can’t Block Critics on Facebook
You share a live video of someone being accosted and then shot by the police. Your live video is taken down for violating community guidelines.
Grey Area – Social media platforms generally prohibit, or impose greater restrictions on, media that depicts death, violence, or serious physical injury in graphic detail. However, many platforms allow an exception to their graphic content prohibitions for content that is educational or documents real-world events.
For instance, Facebook generally prohibits content that glorifies violence or celebrates the suffering or humiliation of others; however, it does allow some exceptions for graphic content that seeks to raise awareness of issues like police brutality. In those cases, Facebook will include a warning screen to alert people that the content may be disturbing and will limit the ability to view the content to adults ages 18 and older.
See the Social Media Guidelines page regarding graphic content for more information.
Resources & Citations
⚖️ Facebook: Violent and Graphic Content
⚖️ Twitter: Sensitive Media Policy
⚖️ TikTok: Violent and graphic content
Whether it is OK for a social media platform to take down your video depicting police brutality is a grey area because each social media platform has differing guidelines surrounding graphic content.
For instance, Facebook generally prohibits content that glorifies violence or celebrates the suffering or humiliation of others; however, it does allow some exceptions for graphic content that seeks to raise awareness of issues like police brutality. In those cases, Facebook will include a warning screen to alert people that the content may be disturbing and will limit the ability to view the content to adults ages 18 and older.
On the other hand, TikTok prohibits all content that is gratuitously shocking, graphic, sadistic, or gruesome, or that promotes, normalizes, or glorifies extreme violence or suffering. However, in certain situations TikTok may not remove some types of this content, such as depictions of graphic deaths, accidents, or fighting, when real-world events are being documented and it is in the public interest. Since this content isn’t appropriate for all audiences, it will not be eligible for recommendation to TikTok’s “For You” feed.
Twitter also prohibits content that depicts death, violence, medical procedures, or serious physical injury in graphic detail, though exceptions may be made for documentary or educational content. Users can’t post or target people with unsolicited graphic content, nor can they include graphic content in live video, profile header, or List banner images.
Given that social media platforms are private companies, they have leeway to determine their content moderation policies and control their enforcement. Whether such a video can be taken down depends on the specific social media platform’s policy. See the Social Media Guidelines page regarding graphic content for more information.
Resources & Citations
⚖️ Facebook: Violent and Graphic Content
⚖️ Twitter: Sensitive Media Policy
⚖️ TikTok: Violent and graphic content
You are filling out an immigration or foreign travel form for the US Department of Homeland Security and are asked to provide a list of your social media handles on twenty platforms, including Facebook, Instagram, Twitter, and YouTube.
Not OK – In 2021, the White House office that reviews federal regulations rejected the U.S. Department of Homeland Security’s proposal to collect social media handles on immigration and travel forms. However, it is important to note that the State Department can and does request social media handles on its visa applications.
Resources & Citations
⚖️ Brennan Center for Justice: White House Office Rejects DHS Proposal to Collect Social Media Data on Travel and Immigration Forms
The U.S. Department of Homeland Security’s (DHS) immigration and travel forms are documents that, once completed, enable United States citizens or visitors to freely cross international borders. A wide range of individuals may submit one of these forms, including people eligible for short trips to the U.S. without a visa, those seeking asylum or refugee status, and permanent U.S. residents seeking to become citizens. In 2020, DHS submitted a proposal to collect social media identifiers on these travel and immigration forms, which would have required about 33 million people a year to register every social media handle they had used over the past five years on 20 different platforms, including Facebook, Twitter, and YouTube.
The following year, the White House rejected the DHS proposal, concluding that DHS didn’t “adequately [demonstrate] the practical utility of collecting this information” and noting that the Muslim ban, the executive order that had directed the proposal, had been repealed. Following this decision, the Biden administration began a review of whether collecting social media identifiers meaningfully improves the screening and vetting process for those seeking immigrant and nonimmigrant entry to the U.S. The results of this report are forthcoming.
While DHS’ social media proposal was rejected, the Department of State (DOS) is still permitted to ask for social media identifiers from the roughly 14 million people a year who fill out visa applications. Like the DHS proposal, the DOS policy was put forth in connection with the Muslim ban and was justified with practically identical supporting documentation.
In sum, if you are filling out a Department of Homeland Security travel form to seek asylum or citizenship, or in connection with a short trip to the U.S. that doesn’t require a visa, you do not need to provide your social media information. If you are filling out a visa application with the Department of State, you will be asked to disclose your social media information.
Resources & Citations
⚖️ Brennan Center for Justice: White House Office Rejects DHS Proposal to Collect Social Media Data on Travel and Immigration Forms