YouTube has always had a set of Community Guidelines that outline what type of content isn’t allowed on YouTube. These policies apply to all types of content on our platform, including videos, comments, links, and thumbnails. Our Community Guidelines are a key part of our broader suite of policies and are regularly updated in consultation with outside experts and YouTube creators to keep pace with emerging challenges.
We enforce these Community Guidelines using a combination of human reviewers and machine learning, and apply them to everyone equally—regardless of the subject or the creator’s background, political viewpoint, position, or affiliation.
Our policies aim to make YouTube a safer community while still giving creators the freedom to share a broad range of experiences and perspectives.
You'll find a full list of our Community Guidelines below:
In addition to Community Guidelines, creators who want to monetize content on YouTube must comply with Monetization Policies.
Each of our policies is carefully thought through so that it is consistent, well-informed, and can be applied to content from around the world. They’re developed in consultation with a wide range of external industry and policy experts, as well as YouTube Creators. New policies go through multiple rounds of testing before they go live to ensure our global team of content reviewers can apply them accurately and consistently.
This work is never finished, and we are always evaluating our policies to understand how we can better strike a balance between keeping the YouTube community protected and giving everyone a voice.
500 hours of video are uploaded to YouTube every minute. That’s a lot of content, which is why our teams come together to make sure that what you see on our platform follows our Community Guidelines. To do that, we combine the power of advanced machine learning systems and our community itself to flag potentially problematic content. Our expert reviewers then remove flagged content that violates our Community Guidelines.
With hundreds of hours of new content uploaded to YouTube every minute, we use a combination of people and machine learning to detect problematic content at scale. Machine learning is well-suited to detect patterns, which helps us to find content similar to other content we’ve already removed, even before it’s viewed.
We also recognize that the best way to quickly remove content is to anticipate problems before they emerge. Our Intelligence Desk monitors the news, social media, and user reports to detect new trends surrounding inappropriate content, and works to make sure our teams are prepared to address them before they can become a larger issue.
The YouTube community also plays an important role in flagging content they think is inappropriate.
- If you see content that you think violates Community Guidelines, you can use our flagging feature to submit content for review.
- We developed the YouTube Trusted Flagger program to provide robust content reporting processes to government agencies and to non-governmental organizations (NGOs) with expertise in a policy area. Participants receive training on YouTube policies and have a direct path of communication with our Trust & Safety specialists. Videos flagged by Trusted Flaggers are not automatically removed; they are subject to the same human review as videos flagged by any other user, though we may expedite that review. NGOs also receive occasional online training on YouTube policies.
Sometimes videos that might otherwise violate our Community Guidelines may be allowed to stay on YouTube if the content offers a compelling reason with visible context for viewers. We often refer to this exception as “EDSA,” which stands for “Educational, Documentary, Scientific or Artistic”. To help determine whether a video might qualify for an EDSA exception, we look at multiple factors, including the video title, descriptions, and the context provided.
EDSA exceptions are a critical way we make sure that important speech stays on YouTube, while protecting the wider YouTube ecosystem from harmful content.
Machine learning systems help us identify and remove spam automatically, as well as remove re-uploads of content we’ve already reviewed and determined violates our policies. YouTube takes action on other flagged videos after review by trained human reviewers. They assess whether the content does indeed violate our policies, and protect content that has an educational, documentary, scientific, or artistic purpose. Our reviewer teams remove content that violates our policies and age-restrict content that may not be appropriate for all audiences. Reviewers' inputs are then used to train and improve the accuracy of our systems at a much larger scale.
If our reviewers decide that content violates our Community Guidelines, we remove the content and send a notice to the Creator. The first time a Creator violates our Community Guidelines, they receive a warning with no penalty to the channel. After one warning, we’ll issue a Community Guidelines strike to the channel, and the account will face temporary restrictions, including not being allowed to upload videos, live streams, or stories for a 1-week period. Channels that receive three strikes within a 90-day period will be terminated. Channels that are dedicated to violating our policies, or that have a single case of severe abuse of the platform, bypass our strikes system and are terminated. All strikes and terminations can be appealed if the Creator believes there was an error, and our teams will re-review the decision.
Sometimes content doesn't violate our Community Guidelines, but may not be appropriate for viewers under 18 years of age. In these cases, our review team will place an age restriction on the video so it will not be visible to viewers under 18 years of age, logged-out users, or to those who have Restricted Mode enabled. Creators can also choose to age restrict their own content at upload if they think it’s not suitable for younger audiences.
The safety of our creators, viewers, and partners is our highest priority. We look to each of you to help us protect this unique and vibrant community. It’s important you understand our Community Guidelines, and the role they play in our shared responsibility to keep YouTube safe. Take the time to carefully read the policy below. You can also check out this page for a full list of our guidelines.
YouTube doesn’t allow anything that artificially increases the number of views, likes, comments, or other metrics, whether through the use of automatic systems or by serving up videos to unsuspecting viewers. Content that exists solely to incentivize engagement (views, likes, comments, etc.) is also prohibited.
Content that doesn't follow this policy may be removed from YouTube, and channels that don't follow it may be terminated.
Important: If you hire someone to promote your channel, their decisions may impact your channel. Any method that violates our policies may result in content removal or a channel takedown, whether it's an action taken by you or someone you've hired.
We consider engagement to be legitimate when a human user’s primary intent is to authentically interact with the content. We consider engagement illegitimate, for example, when it results from coercion or deception, or when the sole purpose of the engagement is financial gain.
If you find content that violates this policy, report it. Instructions for reporting violations of our Community Guidelines are available here. If you've found multiple videos or comments that you would like to report, you can report the channel.
Don’t post content on YouTube if it fits any of the descriptions noted below.
- Linking to or promoting third-party services that artificially inflate metrics like views, likes, and subscribers
- Content linking to or promoting third-party view count or subscriber gaming websites or services
- Offering to subscribe to another creator’s channel only if they subscribe to your channel (“sub4sub”)
- Note: You're allowed to encourage viewers to subscribe, hit the like button, share, or leave a comment
- Content featuring a creator purchasing their views from a third party with the intent of promoting the service
This policy applies to videos, video descriptions, comments, live streams, and any other YouTube product or feature. Keep in mind that this isn't a complete list.
Subscriber numbers displayed in the following places update in real time:
- Your channel homepage
- Account switcher
- Video watch page
- Third-party sites and apps using the YouTube Data API
The number of subscribers in YouTube Analytics may differ from the subscriber count on your YouTube channel. The number in YouTube Analytics is approximately 48 hours behind. The delay lets us perform extra verification and spam reviews so the numbers are accurate.
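As noted above, third-party sites and apps read live subscriber counts through the YouTube Data API. As a rough sketch, the v3 `channels.list` endpoint with `part=statistics` returns counts as strings in a `statistics` object, which apps then parse; the sample payload and channel ID below are illustrative, not real data.

```python
# Sketch of how a third-party app might read a channel's live subscriber
# count via the YouTube Data API v3 channels.list endpoint with
# part=statistics. The sample payload below is illustrative, not real data.
from urllib.parse import urlencode

API_URL = "https://www.googleapis.com/youtube/v3/channels"

def build_request_url(channel_id: str, api_key: str) -> str:
    """Build the channels.list request URL asking for statistics."""
    query = urlencode({"part": "statistics", "id": channel_id, "key": api_key})
    return f"{API_URL}?{query}"

def extract_subscriber_count(response: dict) -> int:
    """Pull subscriberCount from a channels.list JSON response.

    The API returns counts as strings, so convert to int. Returns 0 if
    the channel isn't found or hides its subscriber count.
    """
    items = response.get("items", [])
    if not items:
        return 0
    stats = items[0].get("statistics", {})
    if stats.get("hiddenSubscriberCount"):
        return 0
    return int(stats.get("subscriberCount", 0))

# Illustrative (abridged) response shape:
sample = {
    "items": [
        {
            "id": "UC_exampleChannelId",
            "statistics": {
                "subscriberCount": "123456",
                "hiddenSubscriberCount": False,
            },
        }
    ]
}

print(extract_subscriber_count(sample))  # 123456
```

Because YouTube removes spam subscribers after the fact, a count fetched this way can decrease over time even though it updates in real time.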
Page traffic found to be artificial will not be counted on YouTube and can lead to strikes on your account. Suspended accounts and subscribers that are identified as spam will not count toward your total number of subscribers or views. These aren't active viewers, so their removal shouldn’t impact your views or watch time.
If you've had a video removed for view count gaming, check out this page in the Help Center to learn more.
Here are some examples of content that’s not allowed on YouTube.
- A video testimonial in which a creator shows themselves successfully purchasing artificial page traffic from a third party
- A video in which a creator links to a third-party artificial page traffic provider in a promotional or supportive context. For example: “I got 1 million subscribers on this video in a day and you can too!”
- A video that tries to force or trick viewers into watching another video through deceptive means (for example: a misleadingly labeled info card)
- Channels dedicated to artificial channel engagement traffic or promoting businesses that exist for this sole purpose
Remember these are just some examples, and don't post content if you think it might violate this policy.
If your content violates this policy, we will remove the content and send you an email to let you know. If we can’t verify that a link you post is safe, we may remove the link. Note that violative URLs posted within the video itself or in the video’s metadata may result in the video being removed.
If this is your first time violating our Community Guidelines, you'll likely get a warning with no penalty to your channel. You will have the chance to take a policy training to allow the warning to expire after 90 days. The 90-day period starts from when the training is completed, not when the warning is issued. However, if the same policy is violated within that 90-day window, the warning will not expire and your channel will be given a strike. If you violate a different policy after completing the training, you will get another warning.
If you get 3 strikes within 90 days, your channel will be terminated. Learn more about our strikes system.
We may terminate your channel or account for repeated violations of the Community Guidelines or Terms of Service. We may also terminate your channel or account after a single case of severe abuse, or when the channel is dedicated to a policy violation. We may prevent repeat offenders from taking policy trainings in the future. Learn more about channel or account terminations.
Content intended to impersonate a person or channel is not allowed on YouTube. YouTube also enforces trademark holder rights. When a channel, or content in the channel, causes confusion about the source of goods and services advertised, it may not be allowed.
If you see content that violates this policy, report it.
- If you feel that your channel or another creator's channel is being impersonated, you can report the channel.
Don’t post content on YouTube if it fits any of the descriptions noted below.
- Channel impersonation: A channel that copies another channel's profile, background, or overall look and feel in such a way that makes it look like someone else's channel. The channel does not have to be 100% identical, as long as the intent to copy the other channel is clear.
- Personal impersonation: Content intended to look like someone else is posting it.
If you operate a fan channel, make sure you state so explicitly in your channel name or handle. It should be obvious to your viewers that your channel doesn't represent the original creator, artist, or entity your channel is celebrating.
Here are some examples of content that’s not allowed on YouTube.
- Channels with the same identifier (channel name or handle) and image as another channel, with the only difference being a space inserted into the name or a zero replacing the letter O.
- Using someone else's real name, user name, image, brand, logo, or other personal information to trick people into believing you are that person.
- Setting up a channel using the same identifier (channel name or handle) and image of a person, and then pretending that person is posting content to the channel.
- Setting up a channel using the name and image of a person, and then posting comments on other channels as if they were posted by the person.
- Channels claiming to be a 'fan account' in the channel description, but not stating so clearly in the channel name or handle, or posing as another’s channel and reuploading their content.
- Channels impersonating an existing news channel.
Remember these are just some examples, and don't post content if you think it might violate this policy.
If your content violates this policy, we may terminate your channel or account. Learn more about channel or account terminations.
Links that send users to content that violates our Community Guidelines are not allowed on YouTube. If you find content that violates this policy, report it. Note: Certain links may not be clickable. Learn more here.
Don’t post links in your content on YouTube if they direct users to content that violates our Community Guidelines. This policy includes links that fit any of the descriptions noted below. Please note this is not a complete list.
- Links to pornography
- Links to websites or apps that install malware or unwanted software
- Links to websites or apps phishing for a user’s sign in info, financial info, etc.
- Links to websites, apps, or other sources that give unauthorized access to audio content, audiovisual content, video games, software, or streaming services that normally require payment
- Links to websites that seek to raise funds or recruit for terrorist organizations
- Links to sites containing Child Sexual Abuse Imagery (CSAI)
- Links to sites selling items noted in our regulated goods guidelines
- Links to content that would violate our hate or harassment policies
- Links to content encouraging others to commit violent acts
- Links to content that spreads medical misinformation contradicting local health authorities’ (LHA) or the World Health Organization’s (WHO) medical info about COVID-19
- Links to websites or apps that spread misleading or deceptive content that can cause serious risk of egregious harm, such as interfering with democratic processes
- Links to external sites that contain manifestos of violent attackers
This policy applies to videos, audio, channels, comments, pinned comments, live streams, and any other YouTube product or feature. Links can take any form that would direct a user to a site off YouTube. These include clickable URLs, text of URLs shown in videos or images, and obfuscated URLs (such as writing “dot com” instead of “.com”). They can also include verbally directing users to other sites, encouraging viewers to visit creator profiles or pages on other sites, or promising violative content on other sites. This list is not complete.
Note: Affiliate content doesn't violate YouTube’s Terms of Service. Excessively posting affiliate content in dedicated accounts may violate our policies around spam. You can learn more about what's allowed in our Spam, deceptive practices & scams policies.
Here are some examples of content that’s not allowed on YouTube.
- A video featuring sexually themed content whose description says “click to see what YouTube won’t allow!” and contains a link to a pornographic site.
- A gameplay video description contains a link promising in-game currency or online store credit but links to a site that infects the user’s computer with malware.
- Posting a link to a phishing site that steals users’ banking info and passwords.
- Instructing viewers to copy and paste an unclickable link in the video that takes them to a pornographic or spammy site.
- Any link leading users to a website, file hosting service, or other source that allows them to access or download child sexual abuse imagery.
- Verbally directing viewers to a website to find a profile or page on another platform so they can watch content that violates YouTube’s Community Guidelines.
- Embedding a URL in a video of a site that would mislead voters about the time, place, means, or eligibility requirements for voting.
- A link to an article claiming that COVID-19 vaccines are part of a depopulation agenda.
Remember that this list is not complete. If you think content might violate this policy, don’t post it.
If your content violates this policy, we will remove the content and send you an email to let you know. If we can’t verify that a link you post is safe, we may remove the link. Note that violative URLs posted within the video itself or in the video’s metadata may result in the video being removed.
If this is your first time violating our Community Guidelines, you'll likely get a warning with no penalty to your channel. You will have the chance to take a policy training to allow the warning to expire after 90 days. The 90-day period starts from when the training is completed, not when the warning is issued. However, if the same policy is violated within that 90-day window, the warning will not expire and your channel will be given a strike. If you violate a different policy after completing the training, you will get another warning.
If you get 3 strikes within 90 days, your channel will be terminated. Learn more about our strikes system.
We may terminate your channel or account for repeated violations of the Community Guidelines or Terms of Service. We may also terminate your channel or account after a single case of severe abuse, or when the channel is dedicated to a policy violation. We may prevent repeat offenders from taking policy trainings in the future. Learn more about channel or account terminations.
Note: We recently reorganized our Community Guidelines to provide further clarity around our policies related to Misinformation on YouTube. To review these policies, check out our articles on Misinformation and Elections misinformation.
YouTube doesn’t allow spam, scams, or other deceptive practices that take advantage of the YouTube community. We also don’t allow content where the main purpose is to trick others into leaving YouTube for another site.
If you find content that violates this policy, report it. Instructions for reporting violations of our Community Guidelines are available here. If you've found multiple videos or comments that you would like to report, you can report the channel.
Don’t post content on YouTube if it fits any of the descriptions noted below.
- Video Spam: Content that is excessively posted, repetitive, or untargeted and does one or more of the following:
- Promises viewers they'll see something but instead directs them off site.
- Gets clicks, views, or traffic off YouTube by promising viewers that they’ll make money fast.
- Sends audiences to sites that spread harmful software, try to gather personal info, or other sites that have a negative impact.
- Misleading Metadata or Thumbnails: Using the title, thumbnail, or description to trick users into believing the content is something it is not.
- Scams: Content offering cash gifts, “get rich quick” schemes, or pyramid schemes (sending money without a tangible product in a pyramid structure).
- Incentivization Spam: Content that sells engagement metrics such as views, likes, comments, or any other metric on YouTube. This type of spam can also include content where the only purpose is to boost subscribers, views, or other metrics. For example, offering to subscribe to another creator’s channel solely in exchange for them subscribing to your channel, also known as "Sub4Sub" content.
- Comments Spam: Comments where the sole purpose is to gather personal info from viewers, misleadingly drive viewers off YouTube, or perform any of the prohibited behaviors noted above.
- Repetitive comments: Leaving large amounts of identical, untargeted or repetitive comments.
- 3rd party content: Live streams that include unauthorized 3rd party content and that are not corrected after repeated warnings of possible abuse. Channel owners should actively monitor their live streams and correct any potential issues in a timely manner.
This policy applies to videos, video descriptions, comments, live streams, and any other YouTube product or feature. Keep in mind that this isn't a complete list. Please note these policies also apply to external links in your content. This can include clickable URLs, verbally directing users to other sites in video, as well as other forms.
Note: You're allowed to encourage viewers to subscribe, hit the like button, share, or leave a comment.
The following types of content are not allowed on YouTube. Keep in mind this list isn't complete.
- Content that promises viewers they'll watch something but instead directs them off site to view it.
- Posting the same content repeatedly across one or more channels.
- Massively uploading content that you scraped from other creators.
- Trying to get viewers to install harmful software, or directing them to sites that might compromise their privacy.
- Autogenerated content that computers post without regard for quality or viewer experience.
- Promising money, products, software, or gaming perks at no charge if viewers install software, download an app, or perform other tasks.
- Massively posting affiliate content in dedicated accounts.
- Repeatedly uploading content that you don’t own and that isn’t EDSA.
The following types of content are not allowed on YouTube. Keep in mind this list isn't complete.
- A thumbnail with a picture of a popular celebrity that has nothing to do with the content.
- Using the title, thumbnails, or description to trick users into believing the content is something it is not. For example, when there's a serious risk of egregious real world harm.
The following types of content are not allowed on YouTube. Keep in mind this list isn't complete.
- Making exaggerated promises, such as claims that viewers can get rich fast or that a miracle treatment can cure chronic illnesses such as cancer.
- Promoting cash gifting or other pyramid schemes.
- Accounts dedicated to cash gifting schemes.
- Videos that promise "You'll make $50,000 tomorrow with this plan!"
The following types of content are not allowed on YouTube. Keep in mind this list isn't complete.
- Videos where the purpose is to encourage viewers to subscribe.
- "Subs 4 Subs" videos.
- Videos that offer "likes" for sale.
- A video that offers to give the channel to the 100,000th subscriber without any other content.
The following types of content are not allowed on YouTube. Keep in mind this list isn't complete.
- Comments about surveys or giveaways that promote pyramid schemes.
- "Pay Per Click" referral links in comments.
- Comments that falsely claim to offer full video content. This type of content could be:
- Movies
- TV shows
- Concerts
- Posting links to harmful software or phishing sites in comments: "omg just got tons of B∪cks from here! - [xyz phishing site].com"
- Comments with links to counterfeit stores.
- "Hey, check out my channel/video here!” when the channel/video has nothing to do with the video it was posted in.
- Posting the same comment repeatedly with a link to your channel.
The following types of content are not allowed on YouTube. Keep in mind this list isn't complete.
- Using your phone to stream a television show.
- Using 3rd party software to livestream songs from an album.
Remember these are just some examples, and don't post content if you think it might violate this policy.
If your content violates this policy, we may suspend your monetization or terminate your channel or account. Learn more about monetization policies and channel or account terminations.
For some violations, we may remove the content and issue a warning or a strike against your channel. If this happens, we’ll send you an email to let you know.
You can take a policy training to allow the warning to expire after 90 days. However, if your content violates the same policy within that 90-day window, the warning will not expire and your channel will be given a strike. If you violate a different policy after completing the training, you will get another warning. We may prevent repeat offenders from taking policy trainings in the future.
If you get 3 strikes within 90 days, your channel will be terminated. Learn more about our strikes system.
Playlists are a great way to combine videos that your community may want to watch as a series. We know it’s not often intentional, but there may be times when playlists contain content that’s not allowed on the platform and can cause harm to our community. This means that playlists that violate our Community Guidelines are not allowed on YouTube.
Here’s a simple way to think of it: if you were to combine all of a playlist's videos into one single video, and that video were to violate our Community Guidelines, then the playlist may violate them as well.
If you find content that violates this policy, please report it. Instructions for reporting violations of our Community Guidelines are available here. If you find many videos, comments, or a creator's entire channel that you wish to report, visit our reporting tool.
Don’t post playlists on YouTube if they fit any of the descriptions noted below.
- Playlists with thumbnails, titles, or descriptions that violate our Community Guidelines, such as those that are pornographic or that consist of images intended to shock or disgust.
- Playlists with titles or descriptions that mislead viewers into thinking they’re about to view videos different from what the playlist contains.
- Playlists with videos that don’t individually violate our policies, but are collected in a way that violates the guidelines. This includes but is not limited to:
- Educational content featuring nudity or sexual themes, collected for the purpose of sexual gratification
- Non-sexual content collected to focus on specific body parts or activities for sexual gratification
- Documentary videos of graphic violence, collected for the purpose of glorifying or shocking viewers
- Playlists that include multiple videos that have been removed for violating our guidelines. If you notice that multiple videos in your public playlists have been removed or deleted, please take some time to remove those videos from your playlists as well. If you notice that some videos in your public playlists violate our Community Guidelines, please flag them and remove them from your playlist.
- Playlists that depict physical, sexual, or emotional mistreatment of minors.
Please note this is not a complete list.
Here are some examples of content that’s not allowed on YouTube.
- A playlist of news footage of aerial bombings accompanied by a title such as “Best bombings”.
- A playlist with a title that calls for the segregation of people with intellectual disabilities.
- A playlist that posts an individual’s nonpublic personal identifying information like a phone number, home address, or email for the express purpose of directing abusive attention or traffic toward them.
- A playlist that collects videos of dangerous or threatening pranks, such as a playlist of fake home invasions or robberies.
- A playlist of videos featuring minors accompanied by a title such as “sexy".
If your content violates this policy, we will remove the content and send you an email to let you know. If we can’t verify that a link you post is safe, we may remove the link. Note that violative URLs posted within the video itself or in the video’s metadata may result in the video being removed.
If this is your first time violating our Community Guidelines, you'll likely get a warning with no penalty to your channel. You will have the chance to take a policy training to allow the warning to expire after 90 days. The 90-day period starts from when the training is completed, not when the warning is issued. However, if the same policy is violated within that 90-day window, the warning will not expire and your channel will be given a strike. If you violate a different policy after completing the training, you will get another warning.
If you get 3 strikes within 90 days, your channel will be terminated. Learn more about our strikes system.
We may terminate your channel or account for repeated violations of the Community Guidelines or Terms of Service. We may also terminate your channel or account after a single case of severe abuse, or when the channel is dedicated to a policy violation. We may prevent repeat offenders from taking policy trainings in the future. Learn more about channel or account terminations.
If you post content that encourages other users to violate our Terms of Service, the content may be removed, your account may be penalized, and in some cases your account may be terminated.
Posting content previously removed for violating our Terms of Service, content from creators with a current channel restriction, or content from creators who have been terminated is considered circumvention under our Terms of Service.
If you post such content, it may be removed, and your YouTube channel may also be penalized or terminated. This may also apply to other channels you own.
We may ask you to confirm your age if we think that you are not old enough to use YouTube. You can learn more about this process here.
Update: Content that targets young minors and families but contains sexual themes, violence, obscenity, or other mature themes not suitable for young audiences, is not allowed on YouTube. In addition to your titles, descriptions, and tags, ensure your audience selection matches the audience your content is suitable for.
YouTube doesn’t allow content that endangers the emotional and physical well-being of minors. A minor is someone under 18 years old.
If you find content that violates this policy, report it. If you believe that a child is in danger, you should get in touch with your local law enforcement to report the situation immediately.
Instructions for reporting violations of our Community Guidelines are available here. If you've found multiple videos or comments that you would like to report, you can report the channel.
Don’t post content on YouTube if it fits any of the descriptions below.
- Sexualization of minors: Sexually explicit content featuring minors and content that sexually exploits minors, including minor nudity posted with comedic intent. We report content containing child sexual abuse imagery to the National Center for Missing and Exploited Children, which works with global law enforcement agencies.
- Harmful or dangerous acts involving minors: Content showing a minor participating in dangerous activities or encouraging minors to do dangerous activities, particularly if someone watching could imitate the dangerous act or if the content encourages or praises the dangerous act. Never put minors in harmful situations that may lead to injury, including dangerous stunts, challenges, dares, or pranks. Dangerous acts include, but are not limited to, any act in the categories listed under Extremely Dangerous challenges, such as asphyxiation or electrocution.
- Examples include content that shows minors:
- Drinking alcohol
- Using vaporizers, e-cigarettes, tobacco, or marijuana
- Misusing fireworks
- Using firearms unsupervised
- Abuse of minors: Content that inflicts or advocates for the infliction of physical, sexual, or emotional maltreatment or neglect of a child, including inflicting emotional distress on minors.
- Content that contains infliction of physical, sexual, or emotional abuse of a child within an educational, documentary, scientific, or artistic context and with blurring may receive an exception.
- Infliction of emotional distress on minors: Content that could cause minor participants or viewers emotional distress, including:
- Exposing minors to mature themes
- Simulating parental abuse
- Coercing minors
- Violence
- Misleading family content: Content that targets young minors and families, but contains:
- Sexual themes
- Violence
- Obscenity or other mature themes not suitable for young audiences
- Medical procedures
- Self harm
- Use of adult horror characters
- Other inappropriate themes intended to shock young audiences
- Content that targets young minors and families with age-inappropriate themes within an educational, documentary, scientific, or artistic context may receive an exception. This is not a pass to target young minors and families with mature themes intended to shock young audiences.
- Family-friendly cartoons that target young minors and contain adult or age-inappropriate themes such as violence, sex, death, drugs, and more. We don’t allow content labeled as suitable for kids in the video's title, description, tags, or in the audience selection if it contains age-inappropriate themes.
- Make sure your titles, descriptions, and tags match the audience you're targeting. In addition, ensure your audience selection accurately represents the audience your content is suitable for. You can also age restrict your content upon upload if it’s intended for mature audiences.
- Cyberbullying and harassment involving minors: Content that:
- Intends to shame, deceive or insult a minor
- Reveals personal information like email addresses or bank account numbers
- Contains sexualization
- Encourages others to bully or harass
This policy applies to videos, video descriptions, comments, Stories, Community posts, live streams, playlists, and any other YouTube product or feature. Keep in mind that this isn't a complete list.
Please note these policies also apply to external links in your content. This can include clickable URLs, verbally directing users to other sites in video, as well as other forms.
Age-restricted content
We may add an age restriction to content that includes any of the following.
- Harmful or dangerous acts that adults or minors could imitate: Content containing adults participating in dangerous activities that adults or minors could easily imitate.
- Note: Simply saying, “Don’t try this at home” may not result in an exception. Learn more about age-restricted Harmful or Dangerous content.
- Adult themes in family content: Content meant for adult audiences but could easily be confused with family content. This includes cartoons that contain adult themes such as violence, sex, or death. Remember you can age restrict your content upon upload if it’s intended for mature audiences.
- Vulgar language: Some language is not appropriate for younger audiences. Content using sexually explicit language or excessive profanity may lead to age restriction.
Content featuring minors
To protect minors on YouTube, content that doesn’t violate our policies but features children may have some features disabled at both the channel and content level. These features may include:
- Comments
- Live chat
- Live streaming
- Video recommendations (how and when your video is recommended)
- Community posts
- Shorts remixing
How to protect minors in your content
Before posting content of yourself, your family, or friends, think carefully about whether it may put anyone at risk of negative attention. Minors are a vulnerable population, and YouTube has policies to protect them from unwanted attention.
- Make sure the minor is supervised by an adult and is performing age-appropriate activities such as demonstrating hobbies, educational content or public performances.
- Make sure the attire worn is age-appropriate. Avoid attire that overexposes the minor or is form-fitting.
- Use YouTube's privacy settings to limit who can see the videos you post.
Don’t post content on YouTube that features minors and meets one or more of the following:
- Filmed in private spaces at home such as bedrooms or bathrooms.
- Features minors soliciting contact from strangers, dares or challenges online, or discussing adult topics.
- Shows activities that could draw undesired attention to the minor, such as performing body contortions or ASMR.
- Reveals personal details about a minor.
These are just some examples; you can find more best practices for child safety here. If you are under 18 or the applicable age of majority in your country, Be Internet Awesome can help you stay safe online.
Here are some examples of content not allowed on YouTube.
- Videos or posts featuring minors involved in provocative, sexual, or sexually suggestive activities, challenges and dares, such as kissing or groping.
- Showing minors involved in dangerous activities. For example, physical stunts, using weapons or explosives, or using a controlled substance like alcohol or nicotine including use of vapes or e-cigarettes.
- A video with tags like "for children", or whose audience is set to “Yes, it’s made for kids”, featuring family friendly cartoons engaging in inappropriate acts like injecting needles.
- Offering money, praise, likes, or any other incentive to a minor to participate in physical contact with someone else.
- A video or post that advertises sexual content featuring minors or abusive content featuring minors.
- Predatory behavior involving communications with or about minors.
- Aggregating innocent content of minors for the purposes of sexual gratification.
- Fight or bullying content featuring kids without educational, documentary, scientific, or artistic context and blurring.
- Challenges, pranks, or stunts that pose the risk of physical injury or serious emotional distress. You can learn more about what’s not allowed in our policies around challenges and pranks.
- Encouraging minors to participate in dangerous activities, even if there are no minors in the content.
- Content simulating parental abuse or abandonment, simulating exposure to death or violence, or causing minors intense shame or humiliation.
- Using cartoons, puppets, or family entertainment characters to appeal to children where content features adult themes like violence and sex.
Remember these are just some examples, and don't post content if you think it might violate this policy.
If your content violates this policy, we will remove the content and send you an email to let you know. If we can’t verify that a link you post is safe, we may remove the link. Note that violative URLs posted within the video itself or in the video’s metadata may result in the video being removed.
If this is your first time violating our Community Guidelines, you'll likely get a warning with no penalty to your channel. You will have the chance to take a policy training to allow the warning to expire after 90 days. The 90-day period starts from when the training is completed, not when the warning is issued. However, if the same policy is violated within that 90-day window, the warning will not expire and your channel will be given a strike. If you violate a different policy after completing the training, you will get another warning.
If you get 3 strikes within 90 days, your channel will be terminated. Learn more about our strikes system.
We may terminate your channel or account for repeated violations of the Community Guidelines or Terms of Service. We may also terminate your channel or account after a single case of severe abuse, or when the channel is dedicated to a policy violation. We may prevent repeat offenders from taking policy trainings in the future. Learn more about channel or account terminations.
We have zero tolerance for predatory behavior on YouTube. If we think a child is in danger based on reported content, we’ll help law enforcement investigate the content.
Thumbnails and other images that violate our Community Guidelines aren’t allowed on YouTube. Images include banners, avatars, Community posts, and any other YouTube feature that has images.
If you find thumbnails or other images that violate this policy, report them. If you find a few videos or comments that you want to report, you can report the channel.
Don’t post a thumbnail or other image on YouTube if it shows:
- Pornographic imagery
- Sexual acts, the use of sex toys, fetishes, or other sexually gratifying imagery
- Nudity, including genitals
- Imagery that depicts unwanted sexualization
- Violent imagery that intends to shock or disgust
- Graphic or disturbing imagery with blood or gore
- Vulgar or lewd language
- A thumbnail that misleads viewers to think they’re about to view something that’s not in the video
Note: The above list isn’t complete.
Sometimes, a thumbnail may not be appropriate for all audiences, but it doesn’t violate our Community Guidelines. When that happens, we may age-restrict the video, or we may remove the thumbnail, but we don’t issue a strike on your channel. If we remove a thumbnail, we let you know, and you can upload another thumbnail.
Here’s what we consider when we remove or age-restrict these kinds of thumbnails:
- Whether breasts, buttocks, or genitals are the focal point of the thumbnail
- Whether the subject is depicted in a pose or clothing that is intended to sexually arouse the viewer
- Whether violent or gory imagery is the focal point of the thumbnail
- Whether written text is intended to be vulgar or shock or disgust viewers
- Whether the title, description, tags, or other data indicate an intent to shock or disgust viewers
If your thumbnail contains pornography, we may terminate your channel. If your thumbnail violates other policies, we remove the thumbnail and may issue a strike against your account. If it's the first time you’ve posted content that violates our Community Guidelines, you get a warning with no penalty to your channel. You can take a policy training to allow the warning to expire after 90 days. However, if one of your thumbnails violates the same policy within that 90-day window, the warning will not expire and your channel will be given a strike. If you violate a different policy after completing the training, you will get another warning.
If you get three strikes in 90 days or your channel is dedicated to violative content, your channel will be terminated. Learn more about our strikes system.
We may terminate your channel or account for repeated violations of the Community Guidelines or Terms of Service. We may also terminate your channel or account after a single case of severe abuse, or when the channel is dedicated to a policy violation. We may prevent repeat offenders from taking policy trainings in the future. Learn more about channel or account terminations.
Note: On September 7, 2022, we updated our Nudity and sexual content policy to more consistently enforce our Community Guidelines. You can learn more about these changes on our forum. This policy has been updated with these changes.
Explicit content meant to be sexually gratifying is not allowed on YouTube. Posting pornography may result in content removal or channel termination. Videos containing fetish content will be removed or age-restricted. In most cases, violent, graphic, or humiliating fetishes are not allowed on YouTube.
If you find content that violates this policy, report it. Instructions for reporting violations of our Community Guidelines are available here. If you've found a few videos or comments that you would like to report, you can report the channel.
Sexually explicit content featuring minors and content that sexually exploits minors is not allowed on YouTube. We report content containing child sexual abuse imagery to the National Center for Missing and Exploited Children, which works with global law enforcement agencies.
Explicit content that violates this policy could result in channel termination. This policy applies to real-world, dramatized, illustrated, and animated content, including sex scenes, video games, and music.
Don’t post content on YouTube if it shows:
- The depiction of clothed or unclothed genitals, breasts, or buttocks that are meant for sexual gratification.
- Pornography, the depiction of sexual acts, or fetishes that are meant for sexual gratification.
- Masturbation
- Fondling or groping of genitals, breasts, or buttocks
- Using sex toys to give viewers sexual gratification
- Nudity or partial nudity that’s meant for sexual gratification
- Non-consensual sex acts or the promotion or glorification of non-consensual sex acts, such as sexual assault, incest, bestiality, or zoophilia
- Unwanted sexualization such as non-consensually shared imagery or voyeurism
- Wardrobe accidents or nude photo leaks
- Non-consensual zooming in, or prolonged focus or emphasis on the breasts, buttocks, or genital area for the purpose of sexual gratification
- Violent, graphic, or humiliating fetish content where the purpose is sexual gratification
- Aggregating content that’s meant for sexual gratification
- Any sexual content involving minors — see our Child Safety on YouTube page for more information
Note: The above list isn't complete.
Age-restricted content
We may age-restrict content if it includes nudity or other sexual content but doesn’t depict anything described above. We consider the following when deciding whether to age-restrict or remove content.
- Whether clothed or unclothed breasts, buttocks or genitals are the focal point of the video
- Whether the subject is depicted in a pose that is intended to sexually arouse the viewer
- Whether the language used in the video is graphic or lewd
- Whether the subject's actions in the video encourage sexual arousal, such as by touching of breasts or genitals, or revealing undergarments
- Whether the clothing would be generally unacceptable in public contexts, such as lingerie
- Whether sexual imagery or audio has been blurred, masked, or obscured
- Whether sexual imagery or audio is fleeting or prolonged in the content
- Whether the content invites others to participate in a challenge involving sexual acts
Note: The above list isn’t complete.
This policy applies to videos, video descriptions, comments, live streams, audio, and any other YouTube product or feature. Remember these are just some examples, and don't post content if you think it might violate this policy.
We may allow sexual content when the primary purpose is educational, documentary, scientific, or artistic, and it isn’t gratuitous. For example, a documentary on breast cancer that shows nude breasts would be appropriate, but posting clips out of context from the same documentary for sexual gratification is not. Out-of-context nudity in indigenous communities, during medical procedures, during childbirth, during artistic performances, or during breastfeeding may not meet our documentary exception.
The same applies to depictions of sex scenes in artistic content such as films, audio stories, music, or video games. For example, a film with a sex scene may be allowed if it includes details such as the name of the film, director, and actors in the video content and in the video description. Remember that giving context in the content, title, and description will help us and your viewers determine the primary purpose of the video.
For educational, documentary, scientific, or artistic content that has adult material or graphic violence, we may take into account official third-party ratings to decide whether the content may remain on YouTube. Content that follows our policies but isn’t appropriate for all audiences is age-restricted. Age-restricted content isn’t viewable to anyone who’s under 18 years of age or signed out.
Here are some examples of content that’s not allowed on YouTube.
- Clips extracted from non-pornographic films, shows, or other content in order to isolate sexual content (real world or artistic)
- Groping, kissing, public masturbation, “upskirting”, voyeurism, predatory exhibitionism, or any other content that depicts someone in a sexualized manner without their consent
- Content that depicts sexual acts, behaviors, or sex toys that’s meant for sexual gratification
- Playlists that aggregate content containing nudity or sexual themes for the purpose of sexual gratification
- Provocative dancing that is focused on the dancer’s genitals, buttocks, or breasts, or that includes fondling or groping
- Content that sexualizes rape in any form, or content that aggregates clips of dramatized rape scenes
- Audio or textual depictions of sexual acts for the purpose of sexual gratification
- Content showing bodily fluids or excretion, such as urine, for the purpose of sexual gratification
- Content using otherwise everyday objects or scenarios, such as injections or eating, for the purpose of sexual gratification
- Video game content which has been developed or modified (“modded”) for sexual gratification, or focuses on themes of unwanted sexualization
Note: The above are just some examples. If you think content might violate this policy, don’t post it.
If your content violates this policy, we will remove the content and send you an email to let you know. If we can’t verify that a link you post is safe, we may remove the link. Note that violative URLs posted within the video itself or in the video’s metadata may result in the video being removed.
If this is your first time violating our Community Guidelines, you'll likely get a warning with no penalty to your channel. You will have the chance to take a policy training to allow the warning to expire after 90 days. The 90-day period starts from when the training is completed, not when the warning is issued. However, if the same policy is violated within that 90-day window, the warning will not expire and your channel will be given a strike. If you violate a different policy after completing the training, you will get another warning.
If you get 3 strikes within 90 days, your channel will be terminated. Learn more about our strikes system.
We may terminate your channel or account for repeated violations of the Community Guidelines or Terms of Service. We may also terminate your channel or account after a single case of severe abuse, or when the channel is dedicated to a policy violation. We may prevent repeat offenders from taking policy trainings in the future. Learn more about channel or account terminations.
If your content contains pornography, we may terminate your channel.
Note: On April 18, 2023, we updated our Eating disorders policy to better protect the community from sensitive content that may pose a risk to some audiences. We may remove imitable content, age-restrict content, or show a crisis resource panel on videos about eating disorders or self-harm topics. The below policy is updated with these changes. You can learn more about our approach in this blog post.
At YouTube, we take the health and well-being of all our creators and viewers seriously. Awareness and understanding of mental health is important and we support creators sharing their stories, such as posting content discussing their experiences with depression, self-harm, eating disorders, or other mental health issues.
However, we do not allow content on YouTube that promotes suicide, self-harm, or eating disorders, that is intended to shock or disgust, or that poses a considerable risk to viewers.
If you believe someone is in danger:
- Get in touch with local emergency services for help
- Flag the video to bring it to our attention
If you find yourself being negatively affected by any mental health, suicide, self-harm, or eating disorder related content you come across, know that there is support available and you’re not alone. In the next section you can find a list of resources and contact information for organizations that can offer advice.
For general guidance on how to talk to someone who you may be concerned about, contact local helplines.
If you are depressed, having thoughts of suicide, self-harming, or experiencing an eating disorder, know there is help and you’re not alone. Many people experience these issues while coping with painful emotions. Talking to a mental health care provider can help determine if you have a mental illness that requires care. It can also help you identify healthy, effective coping strategies and develop skills to manage difficult feelings.
Below is a list of organizations dedicated to helping those in need in different countries and regions. These are recognized crisis service partners. Partnerships vary by country/region.
The websites findahelpline.com and www.wikipedia.org/wiki/List_of_suicide_crisis_lines can help you find organizations for regions not listed here.
To read tips and watch videos that can help you feel safer on YouTube, visit the Creator Safety Center.
Below is a list of organizations that help individuals with eating disorders. These organizations are mental health support partners. Partnerships vary by country/region.
| Country/Region | Organization | Phone |
| --- | --- | --- |
| Australia | Butterfly Foundation | 1800 33 4673 |
| Brazil | Centro de Valorização da Vida | 188 |
| Canada | National Eating Disorder Information Centre | 1-866-633-4220 |
| Canada | Anorexie et Boulimie Québec | 1-800-630-0907 |
| France | Fédération Française Anorexie Boulimie | 09 69 325 900 |
| Germany | BZgA - Essstörungen.de | |
| India | Vandrevala Foundation | +91 9999 666 555 |
| Japan | 摂食障害相談ほっとライン | 047-710-8869 |
| Mexico | Línea de la Vida | 800 911 2000 |
| South Korea | 국립정신건강센터 | 1577-0199 |
| United Kingdom | BEAT Eating Disorders | 0808 801 0677 (England), 0808 801 0432 (Scotland), 0808 801 0433 (Wales), 0808 801 0434 (N. Ireland) |
| United States of America | Substance Abuse and Mental Health Services Administration | 1-800-662-4357 |
YouTube users should not be afraid to speak openly about the topics of mental health, suicide, self-harm, and eating disorders in a supportive and non-harmful way.
However, there are times when content is created that is sensitive and may pose a risk for some users. When you create content that contains suicide, self-harm, or eating disorder related topics, take into account the possible negative impact of your content on other users, especially minors and users who may be sensitive to this content.
To protect and support your viewers and other users, please follow the Community Guidelines below when creating content related to suicide, self-harm, or eating disorders. Not following these Community Guidelines may result in a strike, removal of your content, or other restrictions to protect users. Learn more.
This Community Guidelines policy applies to videos, video descriptions, comments, live streams, and any other YouTube product or feature. Keep in mind that this isn't a complete list. Please note these policies also apply to external links in your content. This can include clickable URLs, verbally directing users to other sites in video, as well as other forms. Don’t post the following content:
- Content promoting or glorifying suicide, self-harm, or eating disorders
- Instructions on how to die by suicide, engage in self-harm, or engage in eating disorders (including how to conceal them)
- Content related to suicide, self-harm, or eating disorders that is targeted at minors
- Graphic images of self-harm
- Visuals of bodies of suicide victims unless blurred or covered so they are fully obscured
- Videos showing the lead-up to a suicide, or suicide attempts and suicide rescue footage without sufficient context
- Content showing participation in or instructions for suicide and self-harm challenges (e.g. Blue Whale or Momo challenges)
- Suicide notes or letters without sufficient context
- Content that features weight-based bullying in the context of eating disorders
In some cases we may restrict, rather than remove, suicide, self-harm, or eating disorder content (for example, by placing an age restriction, a warning, or a Crisis Resource Panel on the video) if it meets one or more of the following criteria. Please note this is not a complete list:
- Content that is meant to be educational, documentary, scientific, or artistic
- Content that is of public interest
- Graphic content that is sufficiently blurred
- Dramatizations or scripted content, which includes but is not limited to animations, video games, music videos, and clips from movies and shows
- Detailed discussion of suicide or self-harm methods, locations and hotspots
- Graphic descriptions of self-harm or suicide
- Eating disorder recovery content that includes details which may be triggering to at-risk viewers
We recommend using these best practices in content related to suicide or self-harm to protect your viewers from harm and distress:
- Avoid showing the person who died by suicide, and respect their, and their families’, privacy. Learn more.
- Use wording that is positive and supportive, and focuses on recovery, prevention, and stories of hope.
- Include information and resources for suicide and self-harm prevention and coping strategies. Try to include it in both the video itself and the description of the video.
- Do not use sensationalist language or dramatic visuals.
- Provide context, but avoid discussing how the victim died by suicide. Do not mention the methods or locations.
- Blur content that contains images of suicide victims. You can blur your video with the Editor in YouTube Studio. Learn more.
We recommend using these best practices in content related to eating disorders to protect your viewers from harm and distress:
- Focus on the impact of the disorder instead of the details of the disordered eating behavior.
- Tell your audience that eating disorders commonly cause severe complications.
- Include info and resources for eating disorder prevention and coping strategies. Try to include it in both the video itself and the description of the video.
If your content violates this policy, we will remove the content and send you an email to let you know. If we can’t verify that a link you post is safe, we may remove the link. Note that violative URLs posted within the video itself or in the video’s metadata may result in the video being removed.
If this is your first time violating our Community Guidelines, you'll likely get a warning with no penalty to your channel. You will have the chance to take a policy training to allow the warning to expire after 90 days. The 90-day period starts from when the training is completed, not when the warning is issued. However, if the same policy is violated within that 90-day window, the warning will not expire and your channel will be given a strike. If you violate a different policy after completing the training, you will get another warning.
If you get 3 strikes within 90 days, your channel will be terminated. Learn more about our strikes system.
We may terminate your channel or account for repeated violations of the Community Guidelines or Terms of Service. We may also terminate your channel or account after a single case of severe abuse, or when the channel is dedicated to a policy violation. We may prevent repeat offenders from taking policy trainings in the future. Learn more about channel or account terminations.
Finally, we may also limit your access to live streaming if you suggest that you’ll live stream content that will violate our Community Guidelines. Learn more about restrictions on live streaming.
YouTube may show features or resources to users when content contains suicide or self-harm topics. For example:
- A warning on your video before it starts playing, indicating that it contains content relating to suicide and self-harm
- A panel under the video containing supportive resources such as phone numbers of suicide prevention organizations
Some language may not be appropriate for viewers under 18.
If you find content that violates this policy, report it. Instructions for reporting violations of our Community Guidelines are available here. If you've found a few videos or comments that you would like to report, you can report the channel.
If you're posting content
Explicit content that violates this policy could result in age restriction, content removal, or a strike. We may consider the following factors when deciding whether to age-restrict, remove content, or issue a strike.
- Use of sexually explicit language or narratives
- Use of excessive profanity in the content
- Use of heavy profanity or sexually suggestive terms in the content’s title, thumbnail, or associated metadata
- Use of excessive sexual sounds
Note: The above list isn’t complete.
Here are some examples of content that may be age-restricted:
- A video focused on the use of profanities, like a compilation, song, or clip taken out of context
- A video that uses heavy profanities in the title
- A video that repeatedly uses vulgar or sexual language
This policy applies to videos, video descriptions, comments, live streams, audio, and any other YouTube product or feature. Remember these are just some examples, and don't post content if you think it might violate this policy.
We may allow vulgar language when the primary purpose is educational, documentary, scientific, or artistic, and it isn’t gratuitous. For example, the title of a song with a curse word or a song that contains large amounts of profanity. Remember that giving context in the content, title, and description will help us and your viewers determine the primary purpose of the video.
If your content violates this policy, we will remove the content and send you an email to let you know. If we can’t verify that a link you post is safe, we may remove the link. Note that violative URLs posted within the video itself or in the video’s metadata may result in the video being removed.
If this is your first time violating our Community Guidelines, you'll likely get a warning with no penalty to your channel. You will have the chance to take a policy training to allow the warning to expire after 90 days. The 90-day period starts from when the training is completed, not when the warning is issued. However, if the same policy is violated within that 90-day window, the warning will not expire and your channel will be given a strike. If you violate a different policy after completing the training, you will get another warning.
If you get 3 strikes within 90 days, your channel will be terminated. Learn more about our strikes system.
We may terminate your channel or account for repeated violations of the Community Guidelines or Terms of Service. We may also terminate your channel or account after a single case of severe abuse, or when the channel is dedicated to a policy violation. We may prevent repeat offenders from taking policy trainings in the future. Learn more about channel or account terminations.
We don’t allow content that targets someone with prolonged insults or slurs based on their physical traits or protected group status, like age, disability, ethnicity, gender, sexual orientation, or race. We also don’t allow other harmful behaviors, like threats or doxxing. Keep in mind that we take a stricter approach on content that targets minors.
If you find content that violates this policy, report it. Instructions for reporting violations of our Community Guidelines are available here. If you find many videos or comments that you would like to report, you can report the channel. For tips about how to stay safe, keep your account secure, and protect your privacy, check out the Creator Safety Center and Stay safe on YouTube.
If specific threats are made against you and you feel unsafe, report it directly to your local law enforcement agency.
Don’t post content on YouTube if it fits any of the descriptions noted below.
- Content that contains prolonged insults or slurs based on someone's intrinsic attributes. These attributes include their protected group status, physical attributes, or their status as a survivor of sexual assault, non-consensual intimate imagery distribution, domestic abuse, child abuse, and more.
- Content uploaded with the intent to shame, deceive or insult a minor. This means intending to make a minor feel unpleasant emotions, like distress, shame, or worthlessness; intending to trick them to behave in ways that may harm themselves or their property; or engaging in name-calling toward them. A minor is someone under 18 years old.
Other types of content that violate this policy
- Content that shares, threatens to share, or encourages others to share non-public personally identifiable information (PII).
- PII includes, but isn’t limited to, home addresses; email addresses; sign-in credentials, like a username or password; phone numbers; passport numbers; medical records; or bank account information.
- This doesn't include posting widely available public information, like an official’s office phone number or the phone number of a business.
- This policy applies to sharing your own PII, sharing someone else’s PII, and situations where you accidentally share PII.
- Content must clearly indicate when fake PII is shared. For example, using fake login credentials as part of a training.
- Content that encourages abusive behavior, like brigading. Brigading is when an individual encourages the coordinated abuse of an identifiable individual on or off YouTube.
- Content that promotes harmful conspiracy theories or that targets someone by claiming they’re a part of a harmful conspiracy theory. A harmful conspiracy theory is one that has been linked to direct threats or violent acts.
- Content that threatens an identifiable individual or their property. This includes implicit threats that don’t specify a time or place but may feature a weapon, for example.
- Content that depicts a staged meet-up that is used to accuse an identifiable individual of egregious misconduct with a minor, without the presence of law enforcement.
- Content reveling in or mocking the death or serious injury of an identifiable individual.
- Content that realistically simulates deceased minors, or victims of deadly or well-documented major violent events, describing their death or the violence they experienced.
- Content that depicts creators simulating acts of serious violence against others. For example, executions, torture, maimings, beatings, and more.
- Content that contains stalking of an identifiable individual.
- Content that denies or minimizes someone’s role as a victim of a well-documented, major violent event.
- Content that contains unwanted sexualization of an identifiable individual. This includes:
- Content that describes someone in a lewd, degrading, and sexually explicit manner
- Content that shares, requests or shows how to distribute non-consensual intimate imagery
- Content that fantasizes about, threatens, or supports sexual assault
This policy applies to videos, video descriptions, comments, live streams, and any other YouTube product or feature. Keep in mind that this isn't a complete list. Note that these policies also apply to external links in your content. This can include clickable URLs, verbally directing users to other sites in video, as well as other forms.
If the primary purpose is educational, documentary, scientific, or artistic in nature, we may allow content that includes harassment. These exceptions are not a pass to harass someone. Some examples include:
- Debates related to high-profile officials or leaders: Content featuring debates or discussions of topical issues concerning individuals who have positions of power, like high-profile government officials or CEOs of major multinational corporations.
- Scripted performances: Insults made in the context of an artistic medium such as scripted satire, stand up comedy, or music (such as a diss track). Note: This exception is not a pass to harass someone and claim “I was joking.”
- Harassment education or awareness: Content that features actual or simulated harassment for documentary purposes or with willing participants (such as actors) to combat cyberbullying or raise awareness.
Note: We take a harder line on content that maliciously insults someone based on their protected group status, regardless of whether or not they are a high-profile individual.
In some rare cases, we may remove content or issue other penalties when a creator:
- Repeatedly encourages abusive audience behavior.
- Repeatedly targets, insults, and abuses an identifiable individual based on their intrinsic attributes across several uploads.
- Exposes an individual to risks of physical harm based on the local social or political context.
- Creates content that harms the YouTube community by persistently inciting hostility between creators for personal financial gain.
Here are some examples of content that’s not allowed on YouTube:
- Repeatedly showing pictures of someone and then making statements like “Look at this creature’s teeth, they’re so disgusting!”, with similar commentary targeting intrinsic attributes throughout the video.
- Targeting an individual based on their membership in a protected group, such as by saying: “Look at this [slur targeting a protected group]!”
- Targeting an individual and making claims they are involved in human trafficking in the context of a harmful conspiracy theory where the conspiracy is linked to direct threats or violent acts.
- Using an extreme insult to dehumanize an individual based on their intrinsic attributes. For example: “Look at this dog of a woman! She’s not even a human being — she must be some sort of mutant or animal!”
- Targeting an individual and expressing a wish for their death or serious injury: “I hate her so much. I wish she’d just get hit by a truck and die.”
- Depicting an identifiable individual being murdered or seriously injured. For example: A video includes a clip from a movie, where one character is brutally shot and killed. The video is edited so that a real individual’s photo is placed over the actor’s face.
- Threatening someone’s physical safety. This includes explicit threats like, “When I see you on Saturday, I’m going to kill you.” This also includes implying violence by making statements like, “You better watch out, I’m coming for you” while brandishing a weapon.
- Posting an individual’s non-public personally identifiable information, like a phone number, home address, or email address, to direct abusive attention or traffic toward them. For example, saying, “I got a hold of their phone number. Call and leave messages until they pick up!”
- “Raiding” or directing malicious abuse to identifiable individuals through in-game voice chat or messages during a stream.
- Directing users to leave abusive comments in another creator's comment section.
- Linking to off platform sites that host or feature non-consensual intimate imagery.
- Requesting that other users get in touch to share non-consensual intimate imagery.
- “Swatting” or other prank calls to emergency or crisis response services, or encouraging viewers to act in this or any other harassing behavior.
- Stalking or attempting to blackmail users.
- Video game content which has been developed or modified (“modded”) to promote violence against an individual.
Remember, this list isn't exhaustive. Don't post content if you think it might violate this policy.
If your content violates this policy, we will remove the content and send you an email to let you know. If we can’t verify that a link you post is safe, we may remove the link. Note that violative URLs posted within the video itself or in the video’s metadata may result in the video being removed.
If this is your first time violating our Community Guidelines, you'll likely get a warning with no penalty to your channel. You will have the chance to take a policy training to allow the warning to expire after 90 days. The 90-day period starts from when the training is completed, not when the warning is issued. However, if the same policy is violated within that 90-day window, the warning will not expire and your channel will be given a strike. If you violate a different policy after completing the training, you will get another warning.
If you get 3 strikes within 90 days, your channel will be terminated. Learn more about our strikes system.
We may terminate your channel or account for repeated violations of the Community Guidelines or Terms of Service. We may also terminate your channel or account after a single case of severe abuse, or when the channel is dedicated to a policy violation. We may prevent repeat offenders from taking policy trainings in the future. Learn more about channel or account terminations.
Starting Mar 18, 2024, our Harmful or dangerous content policy will be updated to take a stricter stance on audience disclaimers and to better assess the potential harm of the acts portrayed.
YouTube doesn’t allow content that encourages dangerous or illegal activities that risk serious physical harm or death.
If you find content that violates this policy, report it.
Skip to a specific section of this article:
- Read the full Harmful or dangerous content policy
- Get examples of Harmful or dangerous content
- Age-restricted content and content removal
- Educational, documentary, scientific, or artistic content
- What happens when content violates this policy
Important: This policy applies to videos, video descriptions, comments, live streams, and any other YouTube product or feature. These policies also apply to external links in your content. This can include clickable URLs, verbally directing users to other sites in video, as well as other forms.
Note: The list below isn't complete. If you think that your content might violate this policy, don't post it.
This content isn't allowed on YouTube:
- Extremely dangerous challenges: Challenges that pose an imminent risk of physical injury.
- Dangerous or threatening pranks: Pranks that lead victims to fear imminent serious physical danger, or that create serious emotional distress in minors.
- Harmful or dangerous acts: Acts performed by adults that have a risk of serious harm or death.
- Minors participating in dangerous activities: Content that endangers the emotional and physical well-being of minors. For more info, review our Child safety policy.
- Instructions to kill or harm: Instructions that show or tell viewers how to perform activities that are meant to kill or severely harm others.
- Explosives: Giving instructions to make explosive devices or compounds meant to injure or kill others.
- Firearms: For more info, review our Firearms policy.
- Instructional theft: Videos showing viewers how to steal physical goods or get something for free.
- Hacking: Demonstrating how to use computers or information technology with the intent to steal credentials, compromise personal data, or cause serious harm to others.
- Bypassing payment for digital content or services: Content that shows viewers how to get unauthorized access to content, software, or services that usually require payment.
- Phishing: Content that tries to get or gives instructions for how to get nonpublic personal identifying information from viewers by deceiving them.
- Cryptophishing: Requests for cryptocurrency or cryptocurrency-related wallet details as a part of a phishing scheme.
Learn more about our spam, deceptive practices, and scams policies.
For more info, review our Illegal or regulated goods or services policy.
Here are some examples of harmful or dangerous content that isn’t allowed on YouTube.
Note: The list below isn't complete.
- Asphyxiation: Any activity that prevents breathing or can lead to suffocation. Examples include:
- Choking, drowning, or hanging games
- Eating non-food items
- Misuse of weapons: Using weapons, like guns or knives, without proper safety precautions or in a way that could cause physical harm. Examples include the "No Lackin'" challenge.
- Ingesting harmful substances: Eating, consuming, or inserting non-food objects or chemicals that may cause illness or poisoning. Examples include detergent-eating challenges.
- Burning, freezing, and electrocution: Activities with a serious risk of severe burns, freezing, frostbite, or electrocution. Examples include the fire challenge and the hot water challenge.
- Mutilation and blunt force trauma: Examples include:
- Self-mutilation
- Abstaining from normal health practices
- Falling, impalement, collision, blunt force trauma, or crushing
Note: We may age-restrict content that has educational or documentary context.
- Intentional physical harm: Inflicting physical harm on unsuspecting prank victims. Examples include:
- Punching attacks
- Drugging food or drinks with laxatives
- Electrocution pranks
- Making someone feel like they’re in immediate danger: Tricking others into believing they're in real danger, even if no physical harm comes to them. Examples include:
- Threats with weapons
- Bomb scares
- Swatting or fake 911 calls
- Fake home invasions or robberies
- Fake kidnapping
- Emotional distress to minors/vulnerable individuals: Any prank that causes emotional distress, or fear of a threat to physical safety, to children or others who are vulnerable. Examples include:
- Fake death or suicide
- Fake violence
- Pretending that a parent or caregiver will abandon a child
- Showing a parent or caregiver verbally abuse or shame a child
Note: We may age-restrict prank content that involves adults and that doesn't violate our policies.
- Risking serious harm or death: Behavior that shows adults risking serious bodily harm or death, particularly if someone watching could imitate the dangerous act or if the content encourages or praises the dangerous act. Dangerous acts include, but are not limited to, any act in the categories listed under Extremely dangerous challenges above, such as acts that risk asphyxiation or electrocution.
- We also don’t allow content showing dangerous acts by minors. This includes content that shows minors:
- Drinking alcohol
- Using vaporizers, e-cigarettes, tobacco, or marijuana
- Misusing fireworks
- Using firearms unsupervised
- Extremely dangerous driving: Using a motor vehicle in a way that presents imminent risk of serious injury or death to the driver or others. Examples include:
- Cell phone footage of a motorcyclist deliberately veering into oncoming traffic at high speeds. A voiceover reacts saying “Wow, that was crazy!”
- Driving a car at a high speed along a walkway meant for pedestrians.
- Bomb-making: Showing viewers how to build a bomb meant to injure or kill others. Examples include:
- Pipe bombs
- Package bombs
- Explosive vests
- Molotov cocktails
- Violence involving children: Any real fights or violence between children. For more info, review our Child Safety policy.
Note: We may age-restrict content that has documentary or educational context.
Note: The list below isn't complete.
Sometimes content doesn't violate our policies, but it may not be appropriate for viewers under 18.
We may restrict rather than remove if content showing a dangerous act meets one or more of the following criteria:
- There is educational, documentary, scientific, or artistic context, such as providing information about the risks of the act. For example, this could include context explaining the types of injuries that can happen as a result of doing the dangerous act, or describing your own experience being injured as a result of the dangerous act. It could also include context explaining the types of precautions or training required to do the act safely and prevent injury. Saying, “Don’t try this at home” is not sufficient context.
- The act shown does not risk serious injury.
- The content does not promote the act shown. Promotion includes any form of encouragement or praise of the act, or providing instructions on how to complete the act.
Learn about age-restricted content and how to watch age-restricted videos.
- Prank content featuring adults using excessive fake blood or gruesome fake injuries.
- Showing footage of people engaging in a dangerous challenge, with commentary describing the number of people seriously injured by the challenge.
- Content that shows adults misusing fireworks.
- Content that shows adults using tasers on other willing participants or themselves.
- Content that shows an adult parkour athlete reacting to videos of highly dangerous amateur stunts and commenting about the risk of harm.
Sometimes, content that would otherwise violate this policy is allowed to stay on YouTube when it has Educational, Documentary, Scientific, or Artistic (EDSA) context. Learn about how YouTube evaluates EDSA content.
Note: In some cases, EDSA content may be age-restricted. Certain content isn’t allowed on YouTube even if it has EDSA context added, such as content that sells drugs or regulated pharmaceuticals without a prescription.
- A news piece on the dangers of choking games may be appropriate, but posting clips from the same piece out of context might not be.
- A video in which a professional stunt person performs a dangerous motorcycle jump that shows viewers the safety precautions taken in preparation, like onsite emergency medical personnel and the use of protective equipment.
- A documentary that shows the impact of drug use in a particular community that, while showing viewers drug usage, discourages viewers from using drugs and doesn’t provide information on how to make or purchase them.
- A video that shows dangerous driving or vehicle crashes in controlled environments to educate viewers about safe driving practices or vehicle safety features.
If your content violates this policy, we will remove the content and send you an email to let you know. If we can’t verify that a link you post is safe, we may remove the link. Note that violative URLs posted within the video itself or in the video’s metadata may result in the video being removed.
If this is your first time violating our Community Guidelines, you'll likely get a warning with no penalty to your channel. You will have the chance to take a policy training to allow the warning to expire after 90 days. The 90-day period starts from when the training is completed, not when the warning is issued. However, if the same policy is violated within that 90-day window, the warning will not expire and your channel will be given a strike. If you violate a different policy after completing the training, you will get another warning.
If you get 3 strikes within 90 days, your channel will be terminated. Learn more about our strikes system.
We may terminate your channel or account for repeated violations of the Community Guidelines or Terms of Service. We may also terminate your channel or account after a single case of severe abuse, or when the channel is dedicated to a policy violation. We may prevent repeat offenders from taking policy trainings in the future. Learn more about channel or account terminations.
Familiarize yourself with the rest of our Community Guidelines.
Note: On June 5, 2019, we announced some changes to our hate speech policies. You can learn more about those changes here. The below policy has been updated with those changes.
Hate speech is not allowed on YouTube. We don’t allow content that promotes violence or hatred against individuals or groups based on any of the following attributes, which indicate a protected group status under YouTube’s policy:
- Age
- Caste
- Disability
- Ethnicity
- Gender Identity and Expression
- Nationality
- Race
- Immigration Status
- Religion
- Sex/Gender
- Sexual Orientation
- Victims of a major violent event and their kin
- Veteran Status
If you find content that violates this policy, report it. Instructions for reporting violations of our Community Guidelines are available here. If you've found multiple videos or comments that you would like to report, you can report the channel.
Don’t post content on YouTube if the purpose of that content is to do one or more of the following:
- Encourage violence against individuals or groups based on their protected group status. We don’t allow threats on YouTube, and we treat implied calls for violence as real threats. You can learn more about our policies on threats and harassment.
- Incite hatred against individuals or groups based on their protected group status.
- Dehumanize individuals or groups by calling them subhuman, comparing them to animals, insects, pests, disease, or any other non-human entity based on their protected group status.
- Praise or glorify violence against individuals or groups based on their protected group status.
- Use racial, religious, or other slurs and stereotypes that incite or promote hatred based on protected group status. This can take the form of speech, text, or imagery promoting these stereotypes or treating them as factual.
- Claim that individuals or groups are physically or mentally inferior, deficient, or diseased based on their protected group status. This includes statements that one group is less than another, calling them less intelligent, less capable, or damaged. This also includes calls for the subjugation or domination over individuals or groups based on their protected group status.
- Promote hateful supremacism by alleging the superiority of a group over those with protected group status to justify violence, discrimination, segregation, or exclusion. This includes content containing hateful supremacist propaganda, such as the recruitment of new members or requests for financial support for their ideology, and music videos promoting hateful supremacism in the lyrics, metadata, or imagery.
- Make conspiratorial claims that individuals or groups are evil, corrupt, or malicious based on their protected group status.
- Deny or minimize a well-documented, major violent event or the victimhood of such an event.
- Attack individuals or groups based on their emotional, romantic, and/or sexual attraction to other people.
We may allow content that includes hate speech if that content includes additional educational, documentary, scientific, or artistic context. Additional context may include condemning, refuting, including opposing views, or satirizing hate speech. This is not a pass to promote hate speech. Examples include:
- A documentary about a hate group: Educational content that isn’t supporting the group or promoting ideas would be allowed. A documentary promoting violence or hatred wouldn’t be allowed.
- A documentary about the scientific study of humans: A documentary about how theories have changed over time, even if it includes theories about the inferiority or superiority of specific groups, would be allowed because it’s educational. We won’t allow a documentary claiming there's scientific evidence today that an individual or group is inferior or subhuman.
- Historical footage of an event, like WWII, which doesn't promote violence or hatred.
This policy applies to videos, video descriptions, comments, live streams, and any other YouTube product or feature. Please note these policies also apply to external links in your content. This can include clickable URLs, verbally directing users to other sites in video, as well as other forms.
For educational, documentary, scientific, or artistic content that includes hate speech, this context must appear in the images or audio of the video itself. Providing it in the title or description is insufficient.
In some rare cases, we may remove content or issue other penalties when a creator:
- Repeatedly encourages abusive audience behavior.
- Repeatedly targets, insults, and abuses a group based on protected group status across multiple uploads.
- Exposes a group with protected group status to risks of physical harm based on the local social or political context.
- Creates content that harms the YouTube ecosystem by persistently inciting hostility against a group with protected group status for personal financial gain.
Here are examples of hate speech not allowed on YouTube.
- “I’m glad this [violent event] happened. They got what they deserved [referring to people with protected group status].”
- “[People with protected group status] are dogs” or “[people with protected group status] are like animals.”
- “Get out there and punch a [person with protected group status].”
- “Everyone in [groups with protected group status] are criminals and thugs.”
- “[Person with protected group status] is scum of the earth.”
- “[People with protected group status] are a disease.”
- “[People with protected group status] are less intelligent than us because their brains are smaller.”
- “[Group with protected group status] threaten our existence, so we should drive them out at every chance we get.”
- “[Group with protected group status] have an agenda to run the world and get rid of us.”
- “[Protected group status] is just a form of mental illness that needs to be cured.”
- “[Person with protected group status] shouldn't be educated in schools because they shouldn't be educated at all.”
- “All of the so-called victims of this violent event are actors. No one was hurt, and this is just a false flag.”
- “People died in the event, but a truly insignificant number.”
- Shouting “[people with protected group status] are pests!” at someone regardless of whether the person does or does not have the alleged protected group status.
- Video game content which has been developed or modified (“modded”) to promote violence or hatred against a group with any of the attributes noted above.
Please remember these are just some examples, and don't post content if you think it might violate this policy.
- Amadeu Antonio Stiftung
- Freiwillige Selbstkontrolle Multimedia-Diensteanbieter e. V.
- Jugendschutz.net
If your content violates this policy, we will remove the content and send you an email to let you know. If we can’t verify that a link you post is safe, we may remove the link. Note that violative URLs posted within the video itself or in the video’s metadata may result in the video being removed.
If this is your first time violating our Community Guidelines, you'll likely get a warning with no penalty to your channel. You will have the chance to take a policy training to allow the warning to expire after 90 days. The 90-day period starts from when the training is completed, not when the warning is issued. However, if the same policy is violated within that 90-day window, the warning will not expire and your channel will be given a strike. If you violate a different policy after completing the training, you will get another warning.
If you get 3 strikes within 90 days, your channel will be terminated. Learn more about our strikes system.
We may terminate your channel or account for repeated violations of the Community Guidelines or Terms of Service. We may also terminate your channel or account after a single case of severe abuse, or when the channel is dedicated to a policy violation. We may prevent repeat offenders from taking policy trainings in the future. Learn more about channel or account terminations.
If we think your content comes close to hate speech, we may limit YouTube features available for that content. You can learn more about limited features here.
Content intended to praise, promote, or aid violent extremist or criminal organizations is not allowed on YouTube. These organizations are not allowed to use YouTube for any purpose, including recruitment.
If you find content that violates this policy, report it. Instructions for reporting violations of our Community Guidelines are available here. If you've found multiple videos or comments that you would like to report, you can report the channel.
If you believe anyone is in immediate danger, you should reach out to your local law enforcement agency to report the situation immediately.
Don’t post content on YouTube if it fits any of the descriptions noted below.
- Content produced by violent extremist, criminal, or terrorist organizations
- Content praising or memorializing prominent terrorist, extremist, or criminal figures in order to encourage others to carry out acts of violence
- Content praising or justifying violent acts carried out by violent extremist, criminal, or terrorist organizations
- Content aimed at recruiting new members to violent extremist, criminal, or terrorist organizations
- Content depicting hostages or posted with the intent to solicit, threaten, or intimidate on behalf of a criminal, extremist, or terrorist organization
- Content that depicts the insignia, logos, or symbols of violent extremist, criminal, or terrorist organizations in order to praise or promote them
- Content that glorifies or promotes violent tragedies, such as school shootings
YouTube relies on many factors, including government and international organization designations, to determine what constitutes criminal or terrorist organizations. For example, we terminate any channel where we have reasonable belief that the account holder is a member of a designated terrorist organization, such as a Foreign Terrorist Organization (U.S.) or an organization identified by the United Nations.
If you're posting content related to terrorism or crime for an educational, documentary, scientific, or artistic purpose, be mindful to provide enough information in the video or audio itself so viewers understand the context. Graphic or controversial footage with sufficient context may be subject to age restrictions or a warning screen.
This policy applies to videos, video descriptions, comments, live streams, and any other YouTube product or feature. Keep in mind that this isn't a complete list. Please note these policies also apply to external links in your content. This can include clickable URLs, verbally directing users to other sites in video, as well as other forms.
Here are some examples of content that’s not allowed on YouTube.
- Raw and unmodified reuploads of content created by terrorist, criminal, or extremist organizations
- Celebrating terrorist leaders or their crimes in songs or memorials
- Celebrating terrorist or criminal organizations in songs or memorials
- Content directing users to sites that espouse terrorist ideology, are used to disseminate prohibited content, or are used for recruitment
- Footage filmed by the perpetrator during a deadly or major violent event, in which weapons, violence, or injured victims are visible or audible
- Links to external sites that contain manifestos of violent attackers
- Video game content which has been developed or modified (“modded”) to glorify a violent event, its perpetrators, or support violent criminal or terrorist organizations
- Glorifying violence against civilians
- Fundraising for violent criminal, extremist, or terrorist organizations
Please remember these are just some examples, and don't post content if you think it might violate this policy.
If your content violates this policy, we'll remove the content and send you an email to let you know.
If this is your first time violating our Community Guidelines, you'll likely get a warning. If it’s not, we may issue a strike against your channel. If you get 3 strikes within 90 days, your channel will be terminated. You can learn more about our strikes system here.
Violations may result in monetization being disabled on any of your accounts in accordance with our YouTube channel monetization policies. This can include warnings. If you feel this is a mistake, you can appeal. If the violation is overturned, you can apply for monetization once you're eligible in YouTube Studio.
We may terminate your channel or account for repeated violations of the Community Guidelines or Terms of Service. We may also terminate your channel or account after a single case of severe abuse, or when the channel is dedicated to a policy violation. We may prevent repeat offenders from taking policy trainings in the future. Learn more about channel or account terminations.
The safety of our creators, viewers, and partners is our highest priority. We look to each of you to help us protect this unique and vibrant community. It’s important you understand our Community Guidelines, and the role they play in our shared responsibility to keep YouTube safe. Take the time to carefully read the policy below. You can also check out this page for a full list of our guidelines.
Violent or gory content intended to shock or disgust viewers, or content encouraging others to commit violent acts, is not allowed on YouTube.
If you believe anyone is in imminent danger, you should get in touch with your local law enforcement agency to report the situation immediately.
If you find content that violates this policy, report it. Instructions for reporting violations of our Community Guidelines are available here. If you've found a few videos or comments that you would like to report, you can report the channel.
Don’t post content on YouTube if it fits any of the descriptions noted below.
Violent or graphic content:
- Inciting others to commit violent acts against individuals or a defined group of people.
- Fights involving minors.
- Footage, audio, or imagery involving road accidents, natural disasters, war aftermath, terrorist attack aftermath, street fights, physical attacks, immolation, torture, corpses, protests or riots, robberies, medical procedures, or other such scenarios with the intent to shock or disgust viewers.
- Footage or imagery showing bodily fluids, such as blood or vomit, with the intent to shock or disgust viewers.
- Footage of corpses with massive injuries, such as severed limbs.
Animal abuse content:
- Content where humans coerce animals to fight.
- Content where a human maliciously mistreats an animal and causes them to experience distress outside of traditional or standard practices. Examples of traditional or standard practices include hunting or food preparation.
- Content where a human unnecessarily puts an animal in poor conditions outside of traditional or standard practices. Examples of traditional or standard practices include hunting or food preparation.
- Content that glorifies or promotes serious neglect, mistreatment, or harm toward animals.
- Content that shows animal rescue that is staged and puts the animal in harmful scenarios.
- Graphic content that features animals and intends to shock or disgust.
Dramatized or fictional content:
- Dramatized or fictional footage of content prohibited by these guidelines where the viewer is not given enough context to understand that the footage is dramatized or fictional.
Note that we do not allow the following kinds of content even if there's educational, documentary, scientific, or artistic context provided:
- Violent physical sexual assaults (video, still imagery, or audio).
- Footage filmed by the perpetrator during a deadly or major violent event, in which weapons, violence, or injured victims are visible or audible.
Please note that this is not a complete list.
Keep in mind that this policy also applies to videos, video descriptions, thumbnails, comments, live streams, and any other YouTube product or feature. Please note these policies also apply to external links in your content. This can include clickable URLs, verbally directing users to other sites in video, as well as other forms.
Educational content
We may allow the kinds of violent or graphic content noted above in some cases in educational, documentary, scientific, or artistic content. This is not a pass to upload content meant to shock or disgust, or encourage others to commit violent acts, and we do not make these exceptions for certain kinds of content, like footage of violent physical sexual assault. For educational content containing the kinds of violent or graphic content noted above, this context must appear in the images or audio of the video itself. Providing it in the title or description is insufficient.
For educational, documentary, scientific, or artistic content that has adult material or graphic violence, we may take into account official third-party industry ratings to decide whether the content may remain on YouTube. Content that follows our policies but isn’t appropriate for all audiences is age-restricted. Age-restricted content isn’t viewable by anyone who’s under 18 years of age or signed out.
Age-restricted content
We may apply an age-restriction rather than remove violent or graphic content if that content provides enough context to understand it. For example, content showing victims’ injuries in a road accident may be removed, but we may age-restrict that same content if presented with news coverage that explains the situation and context. For educational use of violent or graphic content, this context must appear in the images or audio of the video itself. You can learn more about the importance of context here.
We also consider public interest when deciding whether content should be removed or age-restricted. For example, we may age-restrict graphic or violent content documenting warzones.
We may also age-restrict fictional violence when it contains graphic scenes, such as people being dismembered or decapitated, or shows human corpses with these severe injuries. Generally, we allow dramatized violence when the content or metadata lets us know that the content is fictional, or when it’s apparent from the content itself, such as animated content or video games.
We consider the following when deciding whether to age-restrict or remove content. Note that this isn’t a complete list:
- Whether violent or gory imagery is the focus of the video. For example, the video focuses solely on the most graphically violent part of a film or video game.
- Whether the title, description, tags, or other data show an intent to shock or disgust viewers.
- Whether violent imagery or audio has been blurred, masked, or obscured.
- The amount of time the violent images or audio is in the content.
- Whether there’s context that lets viewers know that the imagery is dramatized or fictional. For example, through info in the video, title, or description.
- Whether the violence is part of a religious or cultural practice, and the uploader gives viewers that context.
- Whether the content shows the killing of an animal via traditional or standard practices for hunting, religious practice, or food preparation.
This policy applies to videos, video descriptions, comments, live streams, and any other YouTube product or feature.
Here are some examples of content that’s not allowed on YouTube.
- Encouraging others to go to a particular place to commit violence, to perform violence at a particular time, or to target individuals or groups with violence.
- Actual schoolyard fights between minors. We may allow content if minors are only play fighting and that is evident to viewers.
- Beatings or brawls outside the context of professional or professionally supervised sporting events.
Violent or graphic content
The following types of content are not allowed on YouTube. This is not a complete list.
- Medical procedure footage where the content focuses on open wounds and provides no education or explanation to viewers.
- Footage of crimes such as violent robberies that provide no education or explanation to viewers.
- Cell phone, dash cam, or closed circuit TV footage showing the injured or killed in a road accident accompanied by titles such as “Crazy accident” or “Warning: Lots of blood.”
- Videos of beheadings.
- One-sided assaults with titles like "Watch this guy get beat up!"
Animal abuse content
Animal abuse refers to content that shows the malicious infliction of serious physical or psychological harm that causes an animal to suffer. We may make exceptions for content that shows widely accepted practices, like hunting, trapping, pest abatement, food preparation, medical treatment, or animal slaughter that shows harm to an animal or group of animals.
Here are more examples of content that’s not allowed on YouTube:
- Dog fighting, cockfighting, or other coerced animal fighting where humans force animals to attack each other. We do allow content that shows animals fighting in the wild, like in a nature documentary.
- Content that shows animal suffering, neglect, or mistreatment to shock the viewer or glorify the abuse, and doesn’t give enough educational, documentary, scientific, or artistic context.
- Bullfighting with bulls being harmed, like swords in the bull.
- Hunting using non-standard practices, like bombs or poison.
- The staged rescue of animals where the animals are intentionally harmed or placed in dangerous scenarios for dramatic effect.
The above list isn’t complete.
Remember these are just some examples, and don’t post content if you think it might violate this policy.
If your content violates this policy, we will remove the content and send you an email to let you know. If we can’t verify that a link you post is safe, we may remove the link. Note that violative URLs posted within the video itself or in the video’s metadata may result in the video being removed.
If this is your first time violating our Community Guidelines, you'll likely get a warning with no penalty to your channel. You will have the chance to take a policy training to allow the warning to expire after 90 days. The 90-day period starts from when the training is completed, not when the warning is issued. However, if the same policy is violated within that 90-day window, the warning will not expire and your channel will be given a strike. If you violate a different policy after completing the training, you will get another warning.
If you get 3 strikes within 90 days, your channel will be terminated. Learn more about our strikes system.
We may terminate your channel or account for repeated violations of the Community Guidelines or Terms of Service. We may also terminate your channel or account after a single case of severe abuse, or when the channel is dedicated to a policy violation. We may prevent repeat offenders from taking policy trainings in the future. Learn more about channel or account terminations.
Starting June 18, 2024, certain content showing how to remove safety devices will be prohibited. Content showing the use of homemade firearms, automatic firearms, and certain firearm accessories will be age-restricted.
Content intended to sell firearms, instruct viewers on how to make firearms, ammunition, and certain accessories, or instruct viewers on how to install those accessories is not allowed on YouTube. YouTube shouldn't be used as a platform to sell firearms or accessories noted below. YouTube also doesn’t allow live streams that show someone holding, handling, or transporting a firearm.
Sometimes content doesn't violate our policies, but it may not be appropriate for viewers under 18. YouTube age-restricts content showing the use of certain firearms and accessories noted below. This restriction applies to real use of firearms only.
Don’t post content on YouTube if the purpose is to do one or more of the following:
- Sell firearms or certain firearms accessories through direct sales (e.g. private sales by individuals) or links to sites that sell these items. These accessories may include:
- Accessories that enable a firearm to simulate automatic fire,
- Accessories that convert a firearm to automatic fire, such as: bump stocks, gatling triggers, drop-in auto sears, or conversion kits,
- High capacity magazines or belts carrying more than 30 rounds.
- Provide instructions on manufacturing any of the following:
- Firearms,
- Ammunition,
- High capacity magazines,
- Homemade silencers/suppressors,
- Accessories that enable a firearm to simulate automatic fire,
- Accessories that convert a firearm to automatic fire, such as: bump stocks, gatling triggers, drop-in auto sears, or conversion kits.
- Provide instructions on how to convert a firearm to automatic or simulated automatic firing capabilities.
- Provide instructions on how to install the above-mentioned accessories or modifications.
- Provide instructions on how to remove certain firearm safety devices, such as a device that limits the release of a magazine. This does not include removal of a device used to temporarily disable a weapon, like a gun lock.
Please note this is not a complete list.
Sometimes content doesn't violate our policies, but it may not be appropriate for viewers under 18.
- Content showing use of a homemade firearm (e.g. 3D printed gun), an automatic firearm, or any of the below accessories:
- Accessories that enable a firearm to simulate automatic fire
- Accessories that convert a firearm to automatic fire, such as: bump stocks, gatling triggers, drop-in auto sears, or conversion kits
- High capacity magazines
- Homemade silencers/suppressors
- Examples (non-exhaustive):
- Firing a 3D printed firearm
- Firing a fully automatic rifle
- Firing a firearm with a high capacity magazine
These guidelines apply to real use of firearms and may not apply, for example, to use of firearms in artistic content such as a film. We may also make exceptions for public interest content such as military or police footage, news footage, or footage from warzones.
Here are some examples of content that isn’t allowed on YouTube.
- Links in the title or description of your video to sites where firearms or the accessories noted above are sold. You can link to sites that discuss or review the items as long as those sites don’t sell or give away those items directly.
- Displaying a firearm with the intention to sell that firearm via private sale. This includes giving the seller’s phone number, email address, or other contact information.
- Showing users step-by-step instructions on how to finish a lower receiver in order to complete fabrication of a firearm.
- Showing users how to make a silencer out of a flashlight, oil can, solvent catcher, or other parts.
- Showing users how to install a bump stock, or install a comparable accessory built to enable simulated automatic fire.
- Live streams that feature someone holding or handling a firearm, regardless of whether or not they are firing it. Note: this does not include firearms in video games.
- Live streams that feature someone transporting firearms from place to place, such as by carrying them or traveling with them by car, truck, or other vehicle. Note: this does not include firearms in video games.
Please remember these are just some examples, and don't post content if you think it might violate this policy.
If your content violates this policy, we will remove the content and send you an email to let you know. If we can’t verify that a link you post is safe, we may remove the link. Note that violative URLs posted within the video itself or in the video’s metadata may result in the video being removed.
If this is your first time violating our Community Guidelines, you'll likely get a warning with no penalty to your channel. You will have the chance to take a policy training to allow the warning to expire after 90 days. The 90-day period starts from when the training is completed, not when the warning is issued. However, if the same policy is violated within that 90-day window, the warning will not expire and your channel will be given a strike. If you violate a different policy after completing the training, you will get another warning.
If you get 3 strikes within 90 days, your channel will be terminated. Learn more about our strikes system.
We may terminate your channel or account for repeated violations of the Community Guidelines or Terms of Service. We may also terminate your channel or account after a single case of severe abuse, or when the channel is dedicated to a policy violation. We may prevent repeat offenders from taking policy trainings in the future. Learn more about channel or account terminations.
If you find content that violates this policy, report it. Instructions for reporting violations of our Community Guidelines are available here. If you've found a few videos or comments that you would like to report, you can report the channel.
The safety of our creators, viewers, and partners is our highest priority. We look to each of you to help us protect this unique and vibrant community. It’s important you understand our Community Guidelines, and the role they play in our shared responsibility to keep YouTube safe. Take the time to carefully read the policy below. You can also check out this page for a full list of our guidelines.
Content intended to sell certain regulated goods and services is not allowed on YouTube.
If you find content that violates this policy, report it. Instructions for reporting violations of our Community Guidelines are available here. If you've found a few videos or comments that you would like to report, you can report the channel.
Don't post content on YouTube if it aims to directly sell, link to, or facilitate access to any of the regulated goods and services listed below. Facilitating the sale of these items or the use of these services by posting links, email addresses, phone numbers, or other means of contacting a seller directly is not allowed.
- Alcohol
- Bank account passwords, stolen credit cards, or other financial information
- Counterfeit documents or currency
- Controlled narcotics and other drugs
- Explosives
- Organs
- Endangered species or parts of endangered species
- Firearms and certain firearms accessories
- Nicotine, including vaping products
- Online gambling sites not yet reviewed by Google or YouTube
- Pharmaceuticals without a prescription
- Sex or escort services
- Unlicensed medical services
- Human smuggling
Note: If you're providing links or contact information such as phone numbers, emails, or other means of contact where hard drugs or certain poisonous substances can be purchased, or where pharmaceuticals can be purchased without a prescription, your channel may be terminated. See examples below.
Additionally, the following content isn’t allowed on YouTube:
- Hard drug use or creation: Hard drug use or creation, selling or facilitating the sale of hard or soft drugs, facilitating the sale of regulated pharmaceuticals without a prescription, or showing how to use steroids in non-educational content.
- Poison sale or creation: Facilitating the sale, giveaway, creation or modification of certain poisons or poisonous substances.
- Instructional cheating: Content which provides instructions for academic cheating.
This policy applies to videos, video descriptions, comments, live streams, and any other YouTube product or feature. Keep in mind that this isn't a complete list. Please note these policies also apply to external links in your content. This can include clickable URLs, verbally directing users to other sites in video, as well as other forms.
Here are some examples of content that’s not allowed on YouTube.
Keep in mind that this isn’t a complete list.
- Linking to an online gambling or sports betting site that is not approved.
- Selling counterfeit passports or providing instructions on creating forged official documents.
- Advertising escort, prostitution, or erotic massage services.
- Content instructing how to purchase drugs on the dark web.
- A video of a user making a purchase with software that generates fake credit card numbers.
- Including a link to an online pharmacy that does not require prescriptions.
- Content that promotes a product that contains drugs, nicotine, or a controlled substance.
- Displays of hard drug use: Non-educational content that shows the injection of intravenous drugs like heroin, huffing/sniffing glue, or taking tabs of acid.
- Making hard drugs: Non-educational content that explains how to make drugs.
- Minors using alcohol or drugs: Showing minors drinking alcohol, using vaporizers, e-cigarettes, tobacco or marijuana, or misusing fireworks.
- Steroid use: Non-educational content that shows how to use steroids for recreational purposes, like bodybuilding.
- Selling soft drugs: Such as providing links to sites facilitating sale of marijuana or salvia.
- Selling hard drugs: Featuring hard drugs with the goal of selling them. Some types of hard drugs include (note that this is not a complete list, and these substances may also be known under different names):
- Amphetamine
- Cocaine
- Dextromethorphan (DXM)
- Flunitrazepam
- Fentanyl
- GHB
- Heroin
- Ketamine
- K2
- LSD
- MDMA/ecstasy
- Mescaline
- Methamphetamine
- Isotonitazene (ISO)
- Opium
- PCP
- Psilocybin & Psilocybe (magic mushrooms)
- Sale of certain poisonous substances. Some examples include (note that this is not a complete list, and these substances may also be known under different names):
- Cyanide
- Chloroform
- Mercury
- Instructions to make certain poisonous substances: Non-educational content that explains how to make poisonous substances.
Note: If you're providing links or contact information such as phone numbers, emails, or other means of contact where hard drugs or poisonous substances can be purchased, or where pharmaceuticals can be purchased without a prescription, your channel may be terminated.
Please remember these are just some examples, and don't post content if you think it might violate this policy.
Sometimes content doesn't violate our policies, but it may not be appropriate for viewers under 18.
- Content that promotes a cannabis dispensary.
- Content that reviews brands of nicotine e-liquid.
If your content violates this policy, we will remove the content and send you an email to let you know. If we can’t verify that a link you post is safe, we may remove the link. Note that violative URLs posted within the video itself or in the video’s metadata may result in the video being removed.
If this is your first time violating our Community Guidelines, you'll likely get a warning with no penalty to your channel. You will have the chance to take a policy training to allow the warning to expire after 90 days. The 90-day period starts from when the training is completed, not when the warning is issued. However, if the same policy is violated within that 90-day window, the warning will not expire and your channel will be given a strike. If you violate a different policy after completing the training, you will get another warning.
If you get 3 strikes within 90 days, your channel will be terminated. Learn more about our strikes system.
We may terminate your channel or account for repeated violations of the Community Guidelines or Terms of Service. We may also terminate your channel or account after a single case of severe abuse, or when the channel is dedicated to a policy violation. We may prevent repeat offenders from taking policy trainings in the future. Learn more about channel or account terminations.
Certain types of misleading or deceptive content with serious risk of egregious harm are not allowed on YouTube. This includes certain types of misinformation that can cause real-world harm, certain types of technically manipulated content, or content interfering with democratic processes.
If you find content that violates this policy, report it. Instructions for reporting violations of our Community Guidelines are available here. If you've found multiple videos or comments that you would like to report, you can report the channel.
Don’t post content on YouTube if it fits any of the descriptions below.
- Suppression of census participation: Content aiming to mislead census participants about the time, place, means, or eligibility requirements of the census, or false claims that could materially discourage census participation.
- Manipulated content: Content that has been technically manipulated or doctored in a way that misleads users (usually beyond clips taken out of context) and may pose a serious risk of egregious harm.
- Misattributed content: Content that may pose a serious risk of egregious harm by falsely claiming that old footage from a past event is from a current event.
Here are some examples of content that’s not allowed on YouTube.
Suppression of census participation
- Giving incorrect instructions on how to participate in the census.
- Discouraging participation in the census by falsely claiming that a respondent’s immigration status will be reported to law enforcement.
Manipulated content
- Inaccurately translated video subtitles that inflame geopolitical tensions creating serious risk of egregious harm.
- Videos that have been technically manipulated (usually beyond clips taken out of context) to make it appear that a government official is dead.
- Video content that has been technically manipulated (usually beyond clips taken out of context) to fabricate events where there’s a serious risk of egregious harm.
Misattributed content
- Content inaccurately presented as documenting human rights abuses in a specific location that is actually content from another location or event.
- Content showing a military crackdown on protesters with false claims that the content is from a current event, when the footage is actually several years old.
Remember these are just some examples, and don't post content if you think it might violate these policies. Please note these policies also apply to external links in your content. This can include clickable URLs, verbally directing users to other sites in video, as well as other forms.
We may allow content that violates the misinformation policies noted on this page if that content includes additional context in the video, audio, title, or description. This is not a pass to promote misinformation. We may make exceptions if the purpose of the content is to condemn, dispute, or satirize misinformation that violates our policies.
We also allow personal expressions of opinion on the above topics as long as they don’t otherwise violate any of the policies outlined above.
If your content violates this policy, we’ll remove the content and send you an email to let you know. If we can’t verify that a link you post is safe, we may remove the link.
If this is your first time violating our Community Guidelines, you’ll likely get a warning with no penalty to your channel. If it’s not, we may issue a strike against your channel. If you get 3 strikes within 90 days, your channel will be terminated. You can learn more about our strikes system here.
We may terminate your channel or account for repeated violations of the Community Guidelines or Terms of Service. We may also terminate your channel or account after a single case of severe abuse, or when the channel is dedicated to a policy violation. You can learn more about channel or account terminations here.
On June 2, 2023, we updated how this policy applies to past US election outcomes. Learn more in our blog.
Certain types of misleading or deceptive content with serious risk of egregious harm are not allowed on YouTube. This includes certain types of misinformation that can cause real-world harm, like certain types of technically manipulated content, and content interfering with democratic processes.
If you find content that violates this policy, report it. Instructions for reporting violations of our Community Guidelines are available here. If you've found multiple videos or comments from a single channel that you would like to report, you can report the channel.
These policies prohibit certain types of content relating to free and fair democratic elections. Don’t post elections-related content on YouTube if it fits any of the descriptions noted below.
- Voter suppression: Content aiming to mislead voters about the time, place, means, or eligibility requirements for voting, or false claims that could materially discourage voting.
- Candidate eligibility: Content that advances false claims related to the technical eligibility requirements for current political candidates and sitting elected government officials to serve in office. Eligibility requirements considered are based on applicable national law, and include age, citizenship, or vital status.
- Incitement to interfere with democratic processes: Content encouraging others to interfere with democratic processes. This includes obstructing or interrupting voting procedures.
- Election integrity: Content advancing false claims that widespread fraud, errors, or glitches occurred in certain past elections to determine heads of government, or claims that the certified results of those elections were false. This policy currently applies to:
- The 2021 German federal election
- The 2014, 2018, and 2022 Brazilian Presidential elections
Keep in mind that this isn't a complete list.
The following types of content are not allowed on YouTube. This isn't a complete list.
Voter suppression
- Telling viewers they can vote through inaccurate methods like texting their vote to a particular number.
- Giving made-up voter eligibility requirements, like saying that a particular election is only open to voters over 50 years old.
- Telling viewers an incorrect voting date.
- Claiming that a voter’s political party affiliation is visible on a vote-by-mail envelope.
- False claims that non-citizen voting has determined the outcome of past elections.
- False claims that Brazilian electronic voting machines have been hacked in the past to change an individual’s vote.
Candidate eligibility
- Claims that a candidate or sitting government official is not eligible to hold office based on false info about the age required to hold office in that country/region.
- Claims that a candidate or sitting government official is not eligible to hold office based on false info about citizenship status requirements to hold office in that country/region.
- Claims that a candidate or sitting government official is ineligible for office based on false claims that they’re deceased, not old enough, or otherwise do not meet eligibility requirements.
Incitement to interfere with democratic processes
- Telling viewers to create long voting lines with the purpose of making it harder for others to vote.
- Telling viewers to hack government websites to delay the release of elections results.
- Telling viewers to incite physical conflict with election officials, voters, candidates, or other individuals at polling locations to deter voting.
Election integrity
- False claims that widespread fraud, errors, or glitches changed the outcome of the 2021 German parliamentary (Bundestag) election, or content that delegitimizes the formation of the new government or the election and appointment of the German Chancellor.
- False claims that widespread fraud, error, or glitches changed the outcome of the 2018 Brazilian presidential election.
Sometimes, content that would otherwise violate this policy is allowed to stay on YouTube when it has Educational, Documentary, Scientific, or Artistic (EDSA) context in the video, audio, title, or description. This is not a pass to promote misinformation. Additional context may include countervailing views, or if the content condemns, disputes, or satirizes misinformation that violates our policies. Learn about how YouTube evaluates EDSA content.
Elections-related content is also subject to other Community Guidelines. This could include, for example:
- Content that threatens individuals such as election workers, candidates, or voters isn’t allowed under our Harassment & cyberbullying policies.
- Content that has been technically manipulated or doctored in a way that misleads users - usually beyond clips taken out of context - and may pose a serious risk of egregious harm isn’t allowed under our Misinformation policies. For example, footage that has been technically manipulated to make a candidate for public office falsely claim they’re dropping out of the race.
- Content that may pose a serious risk of egregious harm by falsely claiming that old footage from a past event is from a current event isn’t allowed under our Misinformation policies. For example, a video that shows a head of state condoning a violent conflict that he or she never actually condoned.
- Content that encourages others to commit violent acts, including acts targeting election workers, candidates, or voters isn’t allowed under our Violent or graphic content policies.
- Content that promotes violence or hatred against individuals or groups based on certain attributes isn’t allowed under our Hate speech policies. This includes, for example, content that shows a political rally attendee dehumanizing a group based on a protected attribute, such as race, religion, or sexual orientation.
- Content that’s intended to impersonate a person or channel, such as a political candidate or their political party, isn’t allowed under our Impersonation policy.
- Content that contains external links to material that would violate our policies and can cause a serious risk of egregious harm, like misleading or deceptive content relating to an election, hate speech targeting protected groups, or harassment targeting election workers, candidates, or voters. This can include clickable URLs, verbally directing users to other sites in a video, and other forms of link-sharing.
Remember, these are just some examples; don't post content if you think it might violate these policies. Advertiser-friendly content guidelines also apply. Please note these policies also apply to external links in your content. This can include clickable URLs, verbally directing users to other sites in a video, and other forms of link-sharing.
If your content violates this policy, we will remove the content and send you an email to let you know. If we can’t verify that a link you post is safe, we may remove the link. Note that violative URLs posted within the video itself or in the video’s metadata may result in the video being removed.
If this is your first time violating our Community Guidelines, you'll likely get a warning with no penalty to your channel. You will have the chance to take a policy training to allow the warning to expire after 90 days. The 90 day period starts from when the training is completed, not when the warning is issued. However, if the same policy is violated within that 90 day window, the warning will not expire and your channel will be given a strike. If you violate a different policy after completing the training, you will get another warning.
If you get 3 strikes within 90 days, your channel will be terminated. Learn more about our strikes system.
We may terminate your channel or account for repeated violations of the Community Guidelines or Terms of Service. We may also terminate your channel or account after a single case of severe abuse, or when the channel is dedicated to a policy violation. We may prevent repeat offenders from taking policy trainings in the future. Learn more about channel or account terminations.
YouTube doesn't allow content that poses a serious risk of egregious harm by spreading medical misinformation that contradicts local health authorities’ (LHAs) or the World Health Organization’s (WHO) guidance about specific health conditions and substances. This policy includes the following categories:
- Prevention misinformation
- Treatment misinformation
- Denial misinformation
Note: YouTube’s medical misinformation policies are subject to change in response to changes to guidance from health authorities or WHO. There may be a delay between new LHAs/WHO guidance and policy updates, and our policies may not cover all LHA/WHO guidance related to specific health conditions and substances.
Don’t post content on YouTube if it includes any of the following:
Prevention misinformation: We do not allow content that promotes information that contradicts health authority guidance on the prevention or transmission of specific health conditions, or on the safety, efficacy or ingredients of currently approved and administered vaccines.
Treatment misinformation: We do not allow content that promotes information that contradicts health authority guidance on treatments for specific health conditions, including promotion of specific harmful substances or practices that have not been approved by local health authorities or the World Health Organization as safe or effective, or that have been confirmed to cause severe harm.
Denial misinformation: We do not allow content that denies the existence of specific health conditions.
These policies apply to videos, video descriptions, comments, live streams, and any other YouTube product or feature. Keep in mind that this isn't a complete list. Please note these policies also apply to external links in your content. This can include clickable URLs, verbally directing users to other sites in a video, and other forms of link-sharing.
Here are some examples of content that’s not allowed on YouTube. This isn't a complete list.
Harmful substances & practices as prevention methods
- Promotion of the following substances and treatments that present an inherent risk of severe bodily harm or death:
- Miracle Mineral Solution (MMS)
- Black Salve
- Turpentine
- B17/amygdalin/peach or apricot seeds
- High-grade hydrogen peroxide
- Chelation therapy to treat autism
- Colloidal silver
- Gasoline, diesel and kerosene
- Content that promotes use of Ivermectin or Hydroxychloroquine for the prevention of COVID-19.
Guaranteed prevention misinformation
- Claims that there is a guaranteed prevention method for COVID-19.
- Claims that any medication or vaccination is a guaranteed prevention method for COVID-19.
Vaccine misinformation
- Claims that contradict health authority and World Health Organization guidance on safety, efficacy and ingredients of currently administered and approved vaccines.
- Vaccine safety: Content alleging that vaccines cause chronic side effects, such as cancer or paralysis, outside of rare side effects that are recognized by health authorities.
- Examples:
- Claims that the MMR vaccine causes autism.
- Claims that any vaccine causes contraction of COVID-19.
- Claims that vaccines are part of a depopulation agenda.
- Claims that the flu vaccine causes chronic side effects such as infertility, or causes contraction of COVID-19.
- Claims that the HPV vaccine causes chronic side effects such as paralysis.
- Claims that an approved COVID-19 vaccine will cause death, infertility, miscarriage, autism, or contraction of other infectious diseases.
- Claims that achieving herd immunity through natural infection is safer than vaccinating the population.
- Content that promotes the use of unapproved or homemade COVID-19 vaccines.
- Vaccine efficacy: Content claiming that vaccines do not reduce transmission or contraction of disease.
- Examples:
- Claims that vaccines do not reduce risk of contracting illness.
- Claims that vaccines do not reduce the severity of illness, including hospitalization or death.
- Claims that any vaccine is a guaranteed prevention method for COVID-19.
- Ingredients in vaccines: Content misrepresenting the ingredients contained in vaccines.
- Examples:
- Claims that vaccines contain substances that are not on the vaccine ingredient list, such as biological matter from fetuses (e.g. fetal tissue, fetal cell lines) or animal byproducts.
- Claims that vaccines contain substances or devices meant to track or identify those who’ve received them.
- Claims that vaccines alter a person’s genetic makeup.
- Claims that vaccines will make people who receive them magnetic.
Additional resources
More information on vaccines, including their safety and efficacy, can be found below.
Health authority vaccine information:
- Centers for Disease Control and Prevention (CDC) (US)
- European Vaccination Information Portal (EU)
- National Health Service (UK)
- Korea Disease Control and Prevention Agency (Korea)
- National Health Mission (India)
- MHLW Immunization Information (Japan)
- National Vaccination Calendar (Brazil)
- Universal Vaccination Program (Mexico)
- World Health Organization vaccine safety (Global)
- World Health Organization vaccine preventable diseases (Global)
Additional vaccine information:
- American Academy of Pediatrics (US)
- GAVI, the Vaccine Alliance (Global)
- UNICEF (Global)
- Content that promotes information about the transmission of COVID-19 that contradicts guidance from local health authorities or the World Health Organization.
- Content that claims that COVID-19 is not caused by a viral infection.
- Claims that COVID-19 is caused by radiation from 5G networks.
- Content that claims COVID-19 is not contagious.
- Content that claims that COVID-19 cannot spread in certain climates or geographies.
- Content that claims that any group or individual has immunity to the virus or cannot transmit the virus.
Harmful substances & practices as treatment methods
- Promotion of the following substances and treatments that present an inherent risk of severe bodily harm or death:
- Miracle Mineral Solution (MMS)
- Black salve
- Turpentine
- B17/amygdalin/peach or apricot seeds
- High-grade hydrogen peroxide
- Chelation therapy to treat autism
- Colloidal silver
- Gasoline, diesel and kerosene
- Content that recommends the use of specific methods for the treatment of cancer when those methods have not been approved by local health authorities or the World Health Organization as safe or effective, or have been confirmed to be harmful or ineffective for cancer treatment.
- Examples:
- Content that promotes the use of the following methods for the treatment of cancer, outside of clinical trials:
- Caesium chloride (cesium salts)
- Hoxsey therapy
- Coffee enema
- Gerson therapy
- Content that claims that the following methods are safe or effective for the treatment of cancer, outside of clinical trials:
- Antineoplaston therapy
- Quercetin (intravenous injection)
- Methadone
- Over-the-counter chelation therapy
- Content that promotes use of Ivermectin or Hydroxychloroquine for the treatment of COVID-19.
Guaranteed treatment misinformation
- Content that claims that there is a guaranteed cure for cancer outside of approved treatment.
- Content that claims that there is a guaranteed cure for COVID-19.
Harmful alternative methods & discouragement of professional treatment
- Content that claims that approved treatments for cancer are never effective.
- Examples:
- Content that claims that approved treatments for cancer, such as chemotherapy or radiation, are never effective.
- Content that discourages people from seeking approved treatments for cancer.
- Examples:
- Claims that alternative treatments are safer or more effective than approved treatments for cancer.
- Content that claims that juicing has better results than chemotherapy in treating cancer.
- Content that recommends alternative treatments in place of approved treatments for cancer.
- Content that promotes diet and exercise instead of seeking approved treatment for cancer.
- Discouraging people from consulting a medical professional or seeking medical advice if they’re sick with COVID-19.
- Content that encourages the use of home remedies, prayer, or rituals in place of medical treatment for COVID-19 such as consulting a doctor or going to the hospital.
- Content that contradicts local health authorities’ or the World Health Organization’s guidance on the safety of chemical and surgical abortion:
- Claims that abortion causes breast cancer.
- Claims that abortion commonly results in or carries a high risk of infertility or future miscarriage.
- Promotion of alternative abortion methods in place of chemical or surgical methods deemed safe by health authorities.
- Promotion of alternative formulas for infants in place of breast milk or commercial formula.
- Content that denies the existence of COVID-19, or denies that people have died from COVID-19.
- Examples:
- Denial that COVID-19 exists
- Claims that people have not died or gotten sick from COVID-19
- Claims that there have not been cases or deaths in countries where cases or deaths have been confirmed by local health authorities or the WHO
We may allow content that violates the misinformation policies noted on this page if that content includes additional context in the video, audio, title, or description. This is not a pass to promote misinformation. Additional context may include countervailing views from local health authorities or medical experts. We may also make exceptions if the purpose of the content is to condemn, dispute, or satirize misinformation that violates our policies. We may also make exceptions for content discussing the results of a specific medical study, or showing an open public forum, like a protest or public hearing, provided the content does not aim to promote misinformation that violates our policies.
YouTube also believes people should be able to share their own experiences, including personal experiences with vaccinations, for example. This means we may make exceptions for content in which creators describe firsthand experiences from themselves or their family. At the same time, we recognize there is a difference between sharing personal experiences and promoting misinformation. To address this balance, we will still remove content or channels if they include other policy violations or demonstrate a pattern of promoting medical misinformation.
If your content violates this policy, we’ll remove the content and send you an email to let you know. If we can’t verify that a link you post is safe, we may remove the link.
If this is your first time violating our Community Guidelines, you’ll likely get a warning with no penalty to your channel. If it’s not, we may issue a strike against your channel. If you get 3 strikes within 90 days, your channel will be terminated. You can learn more about our strikes system here.
We may terminate your channel or account for repeated violations of the Community Guidelines or Terms of Service. We may also terminate your channel or account after a single case of severe abuse, or when the channel is dedicated to a policy violation. You can learn more about channel or account terminations here.