Best Practices for Setting Up Meta Robots Tags & Robots.txt

Learning how to set up robots.txt and meta robots tags is paramount to success in technical SEO. This short guide will help you implement them correctly.

How to Fix “Excluded by ‘noindex’ Tag” and Edit the Meta Robots Tag on a WordPress Website: Practical Steps

Hello Viewers,

Today in this video we discuss how to fix the “Excluded by ‘noindex’ tag” issue flagged in Google Search Console, and a practical way to edit the meta robots tag on your WordPress website. By following these steps you can set the robots tag to either noindex or index, based on your preferences.
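For reference, the underlying change described in the video is a one-line difference in the page's head section. A sketch (your SEO plugin's exact output may differ):

```html
<!-- A page flagged as "Excluded by 'noindex' tag" carries something like: -->
<meta name="robots" content="noindex, follow">

<!-- To make the page indexable again, remove the tag or set it explicitly: -->
<meta name="robots" content="index, follow">
```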

This video includes:
What is the meta robots noindex tag? – 1:07
“Excluded by ‘noindex’ tag” issue flagged in Google Search Console – 1:25
How to fix the issue, plus why and when – 1:40
How to edit meta robots for a bulk number of URLs in WordPress – 3:07
How to edit the meta robots tag for a specific URL in WordPress – 6:02
What is noindex & how to edit it – 6:45
What is noarchive & how to edit it – 6:48
What is nofollow & how to edit it – 6:50
What is noimageindex & how to edit it – 7:05
What is nosnippet & how to edit it – 7:15

For any questions please let me know in the comment section.

More Interesting Videos on PageSpeed:
How to optimize speed without breaking website design?

Third-Party Impacts on Google PageSpeed Insights Score

1) PageSpeed Optimization Playlist from DeveloperMindedSEO Channel

2) Google Search Console Series –

3) Google Tag Manager –

4) Inbound Marketing –

5) Mobile SEO –

6) Learn with me Google Ads –

7) WordPress Basics Tutorial –

#noindex #metarobots #googlesearchconsole

Best Blogger SEO Settings: robots.txt, Robots Header Tags, and New Meta Tags to Improve SERP Rankings

copy codes from here:

Download Oyedad Theme:

Improve your website's ranking with the help of a few pieces of code. Learn what the robots.txt file is and how to use it.
Get a good ranking for your Blogger website; this video is made just for you.

Boost your Google search ranking by improving the robots header tag and the robots.txt file, and by implementing the new meta tags released by Google.
These codes are available exclusively from us, together with the latest 2020 fully responsive, HTML5, fully SEO-friendly Blogger theme for download.
Understand what the robots.txt file is and what it is used for.
Covered: the Blogger robots.txt SEO file, header tag settings, and generating a robots.txt file for a Blogspot site following best SEO practice.
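As a sketch of what such a file can look like (the sitemap URL is a placeholder, and this is not necessarily the exact code offered above), the default Blogger-style robots.txt is:

```
User-agent: Mediapartners-Google
Disallow:

User-agent: *
Disallow: /search
Allow: /

Sitemap: https://example.blogspot.com/sitemap.xml
```

Here /search is disallowed because Blogger's search and label result pages are thin, duplicate-prone URLs that are generally not worth crawling.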

#bloggertips #SEO #blogspot

Day #66: What is a Robots Meta Tag

In today’s video I talk about what a robots meta tag is and its SEO purpose.
To learn more SEO tips, you can subscribe at:
To view this post on the site, you can go to:

Robots Meta Tags | Lesson 9/34 | Semrush Academy

You’ll gain an understanding of search crawlers and how to manage your crawl budget for them.
Watch the full course for free:

0:08 Robots Meta Tag
1:04 Noindex
1:09 Nofollow
1:22 Having multiple meta tags
3:25 Notranslate
3:32 Summary

✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹
You might find it useful:
Tune up your website’s internal linking with the Site Audit tool:
Understand how Google bots interact with your website by using the Log File Analyzer:

Learn how to use SEMrush Site Audit in our free course:
✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹

A robots meta tag is a granular, page-specific way to determine how a particular URL should be indexed and presented to users in search results. Usually it goes into the “head” section of your page, but the same directives can also be applied using HTTP response headers.
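For illustration, the HTTP-header variant is delivered as an X-Robots-Tag response header (a sketch of a raw response; how you configure it depends on your web server):

```
HTTP/1.1 200 OK
Content-Type: text/html
X-Robots-Tag: noindex
```

This form is especially useful for non-HTML resources such as PDFs, which have no “head” section to put a meta tag in.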

The robots meta tag can be applied globally, meaning you serve one directive that is valid for all crawlers, or you can take a more granular approach and specify a directive that is only valid for, say, Bingbot but not for Googlebot.

The most commonly used directive is noindex, which essentially means: “Dear search engine, please do not display this URL in search results”.
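In the page markup, that directive is a single tag in the head:

```html
<meta name="robots" content="noindex">
```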

It is also possible to combine directives, e.g. noindex and nofollow. Noindex, again, means that this URL will not show up in search results; nofollow means that search engines are not supposed to pass any link equity through the links going out from this specific URL. Keep in mind, though, that Google may still crawl those outgoing links.
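Combined directives are comma-separated within a single content attribute:

```html
<meta name="robots" content="noindex, nofollow">
```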

Having multiple robots meta tags is also possible. You can therefore have different directives for different user-agents. This could be helpful if you want to control Googlebot-news and its indexation behaviour differently from Googlebot for regular web or smartphone results.
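For example (the choice of user-agents here is illustrative):

```html
<!-- Keep this page out of Google News results... -->
<meta name="googlebot-news" content="noindex">

<!-- ...while regular Google web search may still index and follow it -->
<meta name="googlebot" content="index, follow">
```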

From a practical standpoint, the robots meta tag is almost always the better choice for day-to-day use, as it can be applied far more precisely, at a per-URL level. The robots meta tag also does not cause a loss of linking power: for URLs blocked by robots.txt, in contrast, the link equity is essentially lost and not passed on, whereas robots meta tags do not break internal or external linking. Generally speaking, a proper distribution of internal link equity is very hard to get right if lots of pages or even whole folders are blocked in robots.txt. Ultimately, though, the big benefit of the meta tag is that it reduces the set of indexed pages to only the relevant URLs.

In practice you would use noindex especially for URLs with minimal content, direct duplicates, or low-value, low-quality entry pages that cause a bad user experience: internal search results, category pages with very few items on them, or duplicated content (e.g. the print and regular versions of an article). Overall, we are talking about low-value pages that should not serve as an entry point for your users from search results.

There are also some less commonly known values, e.g. noarchive, which prevents a cached copy of the page from being offered, and nosnippet, which prevents a snippet for this URL from showing up in search results. These are not very useful in practice, because for a regular website you normally want a snippet for your URL. You can also specify values like notranslate, which tells Google not to offer a translation of this page in search results.
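These less common values can also be combined in a single tag, e.g.:

```html
<meta name="robots" content="noarchive, nosnippet, notranslate">
```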

In summary: the two most commonly used directives are noindex and nofollow. Noindex is actually the only one you really need, though, because using nofollow internally often causes more problems than it solves. And if you do not want to restrict indexing or crawling at all, you do not need to include a robots meta tag: when no robots meta tag directives are present, Google simply treats the page as index, follow, so don't waste time and resources implementing the default explicitly.
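If you want to audit which directives your own templates actually emit, Python's standard-library HTML parser is enough for a quick check. A minimal sketch (this is not SEMrush's tooling, and the list of recognized user-agent names is an illustrative subset):

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collect robots meta directives, keyed by the targeted user-agent."""

    def __init__(self):
        super().__init__()
        self.directives = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        name = (attrs.get("name") or "").lower()
        # name="robots" targets all crawlers; a specific bot can be
        # targeted by its user-agent name instead (e.g. "googlebot-news").
        if name in ("robots", "googlebot", "googlebot-news", "bingbot"):
            content = attrs.get("content") or ""
            tokens = [t.strip().lower() for t in content.split(",") if t.strip()]
            self.directives.setdefault(name, []).extend(tokens)

page = """
<head>
  <meta name="robots" content="noindex, nofollow">
  <meta name="googlebot-news" content="noindex">
</head>
"""
parser = RobotsMetaParser()
parser.feed(page)
print(parser.directives)
# {'robots': ['noindex', 'nofollow'], 'googlebot-news': ['noindex']}
```

Pointing this at your rendered pages makes it easy to spot a template that accidentally ships noindex to every crawler.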

To catch pages with valuable content mistakenly blocked by the noindex directive, the SEMrush Site Audit offers a dedicated check, which we recommend using.

#TechnicalSEO #TechnicalSEOcourse #MetaRobots #SEMrushAcademy