
Thread: Robots.txt?

  1. #1
    dmi's Avatar
    dmi
    dmi is offline Net Builder
    Join Date
    Mar 2009
    Location
    N43°54′, E017°40′
    Posts
    242
    Thanks
    141
    Thanked 56 Times in 28 Posts

    Robots.txt?

    How important is it to have a robots.txt file installed?

    Mine currently looks like this:

    Code:
    Sitemap: http://www.mysite.com/sitemap.xml.gz
    
    User-agent: *
    Disallow:
    What do you think about blocking /privacy-policy/ and other copy-paste == duplicate pages?

  2. #2
    Will.Spencer's Avatar
    Will.Spencer is offline Retired
    Join Date
    Dec 2008
    Posts
    5,033
    Blog Entries
    1
    Thanks
    1,010
    Thanked 2,327 Times in 1,258 Posts
    Quote Originally Posted by dmi View Post
    How important is it to have a robots.txt file installed?
    I like to use robots.txt as one tool to discourage people who try to download entire websites.

    I also use it to keep the search engines from wasting my bandwidth by downloading pages I really don't care about.

    Code:
    Sitemap: http://www.mysite.com/sitemap.xml.gz
    User-agent: *
    Disallow:
    I'm pretty tired this morning, but it looks like that will block all search engines from all pages? It probably works fine, but I might edit it to be more readable.

    Quote Originally Posted by dmi View Post
    What do you think about blocking /privacy-policy/ and other copy-paste == duplicate pages?
    From a pure SEO perspective, using follow and noindex in the header of those pages works better.
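    For example, something like this in the <head> of each duplicate page (just a sketch of the standard robots meta tag):

    Code:
    <meta name="robots" content="noindex,follow">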
    Submit Your Webmaster Related Sites to the NB Directory
    I swear, by my life and my love of it, that I will never live for the sake of another man, nor ask another man to live for mine.

  3. #3
    designknick is offline Newbie Net Builder
    Join Date
    Jul 2009
    Posts
    28
    Thanks
    0
    Thanked 2 Times in 2 Posts
    If you aren't so concerned with the search engines indexing your pages via natural methods, then it's absolutely fine. I agree that those bots can waste valuable bandwidth, especially if you frequently edit/update your site.

    Also, where SEO is concerned, just as the bots can sometimes ignore the robots.txt file, they can also ignore the header information that you supply. You should also be careful if you have submitted your site to any directories (highly reputed ones with strong PR), because Google, in particular, will substitute the information they supply for your site for what is actually listed.

    So, just be careful. Monitor everything while doing it one way, and if you need to change it, then do so.

    Don't fix it though if it ain't broke!

  4. #4
    TopDogger's Avatar
    TopDogger is offline Über Hund
    Join Date
    Jan 2009
    Location
    Hellfire, AZ
    Posts
    2,946
    Thanks
    341
    Thanked 883 Times in 671 Posts
    Will, I know that you probably just copied the example, but the example that you use is technically incorrect.

    Code:
    Sitemap: http://www.mysite.com/sitemap.xml.gz
    User-agent: *
    Disallow:
    There must be a blank line between records. In that example, the record starts with the User-agent line. A blank line denotes the end of a record in the robots.txt file. The original example is correct.
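    In other words, the same directives with the blank line put back (which is what the original example already had) would be:

    Code:
    Sitemap: http://www.mysite.com/sitemap.xml.gz

    User-agent: *
    Disallow: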

    You can leave the Disallow statement blank. That means that it disallows nothing.

    A lot of people make serious mistakes with the robots.txt file because they think it is read one line at a time. If you read the specification, every group of statements is separated by a blank line, which denotes the end of a record.

    http://www.robotstxt.org/robotstxt.html

    When site owners do something like this:

    Code:
    User-agent: *
    Disallow: /images/
    
    Disallow: /test1/
    Disallow: /test2/
    Disallow: /test3/
    The spiders never see the disallow statements for the test directories, if they are following the specification, because those lines are part of a separate record with no User-agent declaration.
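    If the intent was to block all of those directories for every crawler, the whole thing should presumably stay in one record, with no blank line in the middle:

    Code:
    User-agent: *
    Disallow: /images/
    Disallow: /test1/
    Disallow: /test2/
    Disallow: /test3/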

    I am not sure what happens when multiple directives are combined into a single record, for example when someone does not use any blank lines between them at all.

    Because I use the same Privacy Policy and Terms of Use pages on multiple sites, I block them with a noindex,follow meta tag. From my experience, meta tags are more effective than the robots.txt file. Spiders only periodically scan the robots.txt file, but they read the meta tags each time they index a page. If you want to block duplicate pages, it is a good idea to use both.
    "Democracy is two wolves and a lamb voting on what to have for lunch. Liberty is a well-armed lamb contesting the vote." -- Benjamin Franklin


  5. Thanked by:

    Nick (19 July, 2009), Will.Spencer (20 July, 2009)

  6. #5
    hendricius's Avatar
    hendricius is offline The Interwebs are mine!
    Join Date
    Jul 2009
    Location
    Hamburg, Germany
    Posts
    527
    Blog Entries
    1
    Thanks
    21
    Thanked 50 Times in 37 Posts
    I think most people just use robots.txt to keep the search engines from wasting their bandwidth. I still have some doubts about it, though. What is the relation between robots.txt and the privacy of a site? How do you restrict bots from certain areas if you need to? I'm not sure about this.

  7. #6
    Will.Spencer's Avatar
    Will.Spencer is offline Retired
    Join Date
    Dec 2008
    Posts
    5,033
    Blog Entries
    1
    Thanks
    1,010
    Thanked 2,327 Times in 1,258 Posts
    Quote Originally Posted by hendricius View Post
    What is the relation between robots.txt and the privacy of a site? How do you restrict bots from certain areas if you need to? I'm not sure about this.
    robots.txt compliance is voluntary, so it is useless for protecting privacy.

    For privacy, use proper usernames and passwords.
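    For example, on an Apache server (just a sketch; the file paths are placeholders you would adjust for your own setup), you can password-protect a directory with HTTP basic authentication:

    Code:
    # .htaccess in the directory you want to protect (Apache assumed)
    # Create the password file first with: htpasswd -c /path/to/.htpasswd username
    AuthType Basic
    AuthName "Private area"
    AuthUserFile /path/to/.htpasswd
    Require valid-user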
    Submit Your Webmaster Related Sites to the NB Directory
    I swear, by my life and my love of it, that I will never live for the sake of another man, nor ask another man to live for mine.

  8. #7
    dmi's Avatar
    dmi
    dmi is offline Net Builder
    Join Date
    Mar 2009
    Location
    N43°54′, E017°40′
    Posts
    242
    Thanks
    141
    Thanked 56 Times in 28 Posts
    Quote Originally Posted by TopDogger View Post
    Because I use the same Privacy Policy and Terms of Use pages on multiple sites, I block them with a noindex,follow meta tag. From my experience, meta tags are more effective than the robots.txt file. Spiders only periodically scan the robots.txt file, but they read the meta tags each time they index a page. If you want to block duplicate pages, it is a good idea to use both.
    Where do you block them with a noindex,follow? On your homepage or in the privacy policy page itself (in its header)?

    Anyway, my robots.txt is read fine by GWT. It shows no errors. The first line is the sitemap, then a blank line, then the next two lines, which actually disallow nothing.

    I guess it's fine.

  9. #8
    dmi's Avatar
    dmi
    dmi is offline Net Builder
    Join Date
    Mar 2009
    Location
    N43°54′, E017°40′
    Posts
    242
    Thanks
    141
    Thanked 56 Times in 28 Posts
    This is what my robots.txt looks like now:

    Code:
    Sitemap: http://www.mysite.com/sitemap.xml.gz
    
    User-agent: *
    Disallow: /contact/
    Disallow: /privacy-policy/
    Disallow: /terms/
    Disallow: /feed/
    Disallow: /wp-
    Disallow: /cgi-bin/
    
    User-agent: Googlebot-Image
    Allow: /
    
    User-agent: Mediapartners-Google
    Allow: /
    
    User-agent: OutfoxBot/0.5
    User-agent: complex_network_group
    User-agent: Alexibot
    User-agent: Aqua_Products
    User-agent: BackDoorBot
    User-agent: BackDoorBot/1.0
    User-agent: BPImageWalker/2.0
    User-agent: Black.Hole
    User-agent: BlackWidow
    User-agent: BlowFish
    User-agent: BlowFish/1.0
    User-agent: Bookmark search tool
    User-agent: Bot mailto:craftbot@yahoo.com
    User-agent: BotALot
    User-agent: BotRightHere
    User-agent: BuiltBotTough
    User-agent: Bullseye
    User-agent: Bullseye/1.0
    User-agent: BunnySlippers
    User-agent: Cegbfeieh
    User-agent: CheeseBot
    User-agent: CherryPicker
    User-agent: CherryPickerElite/1.0
    User-agent: CherryPickerSE/1.0
    User-agent: ChinaClaw
    User-agent: Copernic
    User-agent: CopyRightCheck
    User-agent: Crescent
    User-agent: Crescent Internet ToolPak HTTP OLE Control v.1.0
    User-agent: Custo
    User-agent: DISCo
    User-agent: DISCo Pump 3.0
    User-agent: DISCo Pump 3.2
    User-agent: DISCoFinder
    User-agent: DittoSpyder
    User-agent: Download Demon
    User-agent: Download Demon/3.2.0.8
    User-agent: Download Demon/3.5.0.11
    User-agent: EirGrabber
    User-agent: EmailCollector
    User-agent: EmailSiphon
    User-agent: EmailWolf
    User-agent: EroCrawler
    User-agent: Express WebPictures
    User-agent: Express WebPictures (www.express-soft.com)
    User-agent: ExtractorPro
    User-agent: EyeNetIE
    User-agent: FairAd Client
    User-agent: Flaming AttackBot
    User-agent: FlashGet
    User-agent: FlashGet WebWasher 3.2
    User-agent: Foobot
    User-agent: FrontPage
    User-agent: FrontPage [NC,OR]
    User-agent: Gaisbot
    User-agent: GetRight
    User-agent: GetRight/2.11
    User-agent: GetRight/3.1
    User-agent: GetRight/3.2
    User-agent: GetRight/3.3
    User-agent: GetRight/3.3.3
    User-agent: GetRight/3.3.4
    User-agent: GetRight/4.0.0
    User-agent: GetRight/4.1.0
    User-agent: GetRight/4.1.1
    User-agent: GetRight/4.1.2
    User-agent: GetRight/4.2
    User-agent: GetRight/4.2b (Portuguxeas)
    User-agent: GetRight/4.2c
    User-agent: GetRight/4.3
    User-agent: GetRight/4.5
    User-agent: GetRight/4.5a
    User-agent: GetRight/4.5b
    User-agent: GetRight/4.5b1
    User-agent: GetRight/4.5b2
    User-agent: GetRight/4.5b3
    User-agent: GetRight/4.5b6
    User-agent: GetRight/4.5b7
    User-agent: GetRight/4.5c
    User-agent: GetRight/4.5d
    User-agent: GetRight/4.5e
    User-agent: GetRight/5.0beta1
    User-agent: GetRight/5.0beta2
    User-agent: GetWeb!
    User-agent: Go!Zilla
    User-agent: Go!Zilla (www.gozilla.com)
    User-agent: Go!Zilla 3.3 (www.gozilla.com)
    User-agent: Go!Zilla 3.5 (www.gozilla.com)
    User-agent: Go-Ahead-Got-It
    User-agent: GrabNet
    User-agent: Grafula
    User-agent: HMView
    User-agent: HTTrack
    User-agent: HTTrack 3.0
    User-agent: HTTrack 3.0x
    User-agent: HTTrack [NC,OR]
    User-agent: Harvest
    User-agent: Harvest/1.5
    User-agent: Image Stripper
    User-agent: Image Sucker
    User-agent: Indy Library
    User-agent: Indy Library [NC,OR]
    User-agent: InfoNaviRobot
    User-agent: InterGET
    User-agent: Internet Ninja
    User-agent: Internet Ninja 4.0
    User-agent: Internet Ninja 5.0
    User-agent: Internet Ninja 6.0
    User-agent: Iron33/1.0.2
    User-agent: JOC Web Spider
    User-agent: JennyBot
    User-agent: JetCar
    User-agent: Kenjin Spider
    User-agent: Kenjin.Spider
    User-agent: Keyword Density/0.9
    User-agent: Keyword.Density
    User-agent: LNSpiderguy
    User-agent: LeechFTP
    User-agent: LexiBot
    User-agent: LinkScan/8.1a Unix
    User-agent: LinkScan/8.1a.Unix
    User-agent: LinkWalker
    User-agent: LinkextractorPro
    User-agent: MIDown tool
    User-agent: MIIxpc
    User-agent: MIIxpc/4.2
    User-agent: MSIECrawler
    User-agent: Mass Downloader
    User-agent: Mass Downloader/2.2
    User-agent: Mata Hari
    User-agent: Mata.Hari
    User-agent: Microsoft URL Control
    User-agent: Microsoft URL Control - 5.01.4511
    User-agent: Microsoft URL Control - 6.00.8169
    User-agent: Microsoft.URL
    User-agent: Mister PiX
    User-agent: Mister PiX version.dll
    User-agent: Mister Pix II 2.01
    User-agent: Mister Pix II 2.02a
    User-agent: Mister.PiX
    User-agent: NICErsPRO
    User-agent: NPBot
    User-agent: NPbot
    User-agent: Navroad
    User-agent: NearSite
    User-agent: Net Vampire
    User-agent: Net Vampire/3.0
    User-agent: NetAnts
    User-agent: NetAnts/1.10
    User-agent: NetAnts/1.23
    User-agent: NetAnts/1.24
    User-agent: NetAnts/1.25
    User-agent: NetMechanic
    User-agent: NetSpider
    User-agent: NetZIP
    User-agent: NetZip Downloader 1.0 Win32(Nov 12 1998)
    User-agent: NetZip-Downloader/1.0.62 (Win32; Dec 7 1998)
    User-agent: NetZippy+(http:/www.innerprise.net/usp-spider.asp)
    User-agent: Octopus
    User-agent: Offline Explorer
    User-agent: Offline Explorer/1.2
    User-agent: Offline Explorer/1.4
    User-agent: Offline Explorer/1.6
    User-agent: Offline Explorer/1.7
    User-agent: Offline Explorer/1.9
    User-agent: Offline Explorer/2.0
    User-agent: Offline Explorer/2.1
    User-agent: Offline Explorer/2.3
    User-agent: Offline Explorer/2.4
    User-agent: Offline Explorer/2.5
    User-agent: Offline Navigator
    User-agent: Offline.Explorer
    User-agent: Openbot
    User-agent: Openfind
    User-agent: Openfind data gatherer
    User-agent: Oracle Ultra Search
    User-agent: PageGrabber
    User-agent: Papa Foto
    User-agent: PerMan
    User-agent: ProPowerBot/2.14
    User-agent: ProWebWalker
    User-agent: Python-urllib
    User-agent: QueryN Metasearch
    User-agent: QueryN.Metasearch
    User-agent: RMA
    User-agent: Radiation Retriever 1.1
    User-agent: ReGet
    User-agent: RealDownload
    User-agent: RealDownload/4.0.0.40
    User-agent: RealDownload/4.0.0.41
    User-agent: RealDownload/4.0.0.42
    User-agent: RepoMonkey
    User-agent: RepoMonkey Bait & Tackle/v1.01
    User-agent: SiteSnagger
    User-agent: SlySearch
    User-agent: SmartDownload
    User-agent: SmartDownload/1.2.76 (Win32; Apr 1 1999)
    User-agent: SmartDownload/1.2.77 (Win32; Aug 17 1999)
    User-agent: SmartDownload/1.2.77 (Win32; Feb 1 2000)
    User-agent: SmartDownload/1.2.77 (Win32; Jun 19 2001)
    User-agent: SpankBot
    User-agent: Sqworm/2.9.85-BETA (beta_release; 20011115-775; i686-pc-linux
    User-agent: SuperBot
    User-agent: SuperBot/3.0 (Win32)
    User-agent: SuperBot/3.1 (Win32)
    User-agent: SuperHTTP
    User-agent: SuperHTTP/1.0
    User-agent: Surfbot
    User-agent: Szukacz/1.4
    User-agent: Teleport
    User-agent: Teleport Pro
    User-agent: Teleport Pro/1.29
    User-agent: Teleport Pro/1.29.1590
    User-agent: Teleport Pro/1.29.1634
    User-agent: Teleport Pro/1.29.1718
    User-agent: Teleport Pro/1.29.1820
    User-agent: Teleport Pro/1.29.1847
    User-agent: TeleportPro
    User-agent: Telesoft
    User-agent: The Intraformant
    User-agent: The.Intraformant
    User-agent: TheNomad
    User-agent: TightTwatBot
    User-agent: Titan
    User-agent: True_Robot
    User-agent: True_Robot/1.0
    User-agent: TurnitinBot
    User-agent: TurnitinBot/1.5
    User-agent: URL Control
    User-agent: URL_Spider_Pro
    User-agent: URLy Warning
    User-agent: URLy.Warning
    User-agent: VCI
    User-agent: VCI WebViewer VCI WebViewer Win32
    User-agent: VoidEYE
    User-agent: WWW-Collector-E
    User-agent: WWWOFFLE
    User-agent: Web Image Collector
    User-agent: Web Sucker
    User-agent: Web.Image.Collector
    User-agent: WebAuto
    User-agent: WebAuto/3.40 (Win98; I)
    User-agent: WebBandit
    User-agent: WebBandit/3.50
    User-agent: WebCapture 2.0
    User-agent: WebCopier
    User-agent: WebCopier v.2.2
    User-agent: WebCopier v2.5
    User-agent: WebCopier v2.6
    User-agent: WebCopier v2.7a
    User-agent: WebCopier v2.8
    User-agent: WebCopier v3.0
    User-agent: WebCopier v3.0.1
    User-agent: WebCopier v3.2
    User-agent: WebCopier v3.2a
    User-agent: WebEMailExtrac.*
    User-agent: WebEnhancer
    User-agent: WebFetch
    User-agent: WebGo IS
    User-agent: WebLeacher
    User-agent: WebReaper
    User-agent: WebReaper [info@webreaper.net]
    User-agent: WebReaper [webreaper@otway.com]
    User-agent: WebReaper v9.1 - www.otway.com/webreaper
    User-agent: WebReaper v9.7 - www.webreaper.net
    User-agent: WebReaper v9.8 - www.webreaper.net
    User-agent: WebReaper vWebReaper v7.3 - www,otway.com/webreaper
    User-agent: WebSauger
    User-agent: WebSauger 1.20b
    User-agent: WebSauger 1.20j
    User-agent: WebSauger 1.20k
    User-agent: WebStripper
    User-agent: WebStripper/2.03
    User-agent: WebStripper/2.10
    User-agent: WebStripper/2.12
    User-agent: WebStripper/2.13
    User-agent: WebStripper/2.15
    User-agent: WebStripper/2.16
    User-agent: WebStripper/2.19
    User-agent: WebWhacker
    User-agent: WebZIP
    User-agent: WebZIP/2.75 (http:/www.spidersoft.com)
    User-agent: WebZIP/3.65 (http:/www.spidersoft.com)
    User-agent: WebZIP/3.80 (http:/www.spidersoft.com)
    User-agent: WebZIP/4.0 (http:/www.spidersoft.com)
    User-agent: WebZIP/4.1 (http:/www.spidersoft.com)
    User-agent: WebZIP/4.21
    User-agent: WebZIP/4.21 (http:/www.spidersoft.com)
    User-agent: WebZIP/5.0
    User-agent: WebZIP/5.0 (http:/www.spidersoft.com)
    User-agent: WebZIP/5.0 PR1 (http:/www.spidersoft.com)
    User-agent: WebZip
    User-agent: WebZip/4.0
    User-agent: WebmasterWorldForumBot
    User-agent: Website Quester
    User-agent: Website Quester - www.asona.org
    User-agent: Website Quester - www.esalesbiz.com/extra/
    User-agent: Website eXtractor
    User-agent: Website eXtractor (http:/www.asona.org)
    User-agent: Website.Quester
    User-agent: Webster Pro
    User-agent: Webster.Pro
    User-agent: Wget
    User-agent: Wget/1.10.2
    User-agent: Wget/1.5.2
    User-agent: Wget/1.5.3
    User-agent: Wget/1.6
    User-agent: Wget/1.7
    User-agent: Wget/1.8
    User-agent: Wget/1.8.1
    User-agent: Wget/1.8.1+cvs
    User-agent: Wget/1.8.2
    User-agent: Wget/1.9-beta
    User-agent: Widow
    User-agent: Xaldon WebSpider
    User-agent: Xaldon WebSpider 2.5.b3
    User-agent: Xenu's
    User-agent: Xenu's Link Sleuth 1.1c
    User-agent: Zeus
    User-agent: Zeus 11389 Webster Pro V2.9 Win32
    User-agent: Zeus 11652 Webster Pro V2.9 Win32
    User-agent: Zeus 18018 Webster Pro V2.9 Win32
    User-agent: Zeus 26378 Webster Pro V2.9 Win32
    User-agent: Zeus 30747 Webster Pro V2.9 Win32
    User-agent: Zeus 32297 Webster Pro V2.9 Win32
    User-agent: Zeus 39206 Webster Pro V2.9 Win32
    User-agent: Zeus 41641 Webster Pro V2.9 Win32
    User-agent: Zeus 44238 Webster Pro V2.9 Win32
    User-agent: Zeus 51070 Webster Pro V2.9 Win32
    User-agent: Zeus 51674 Webster Pro V2.9 Win32
    User-agent: Zeus 51837 Webster Pro V2.9 Win32
    User-agent: Zeus 63567 Webster Pro V2.9 Win32
    User-agent: Zeus 6694 Webster Pro V2.9 Win32
    User-agent: Zeus 71129 Webster Pro V2.9 Win32
    User-agent: Zeus 82016 Webster Pro V2.9 Win32
    User-agent: Zeus 82900 Webster Pro V2.9 Win32
    User-agent: Zeus 84842 Webster Pro V2.9 Win32
    User-agent: Zeus 90872 Webster Pro V2.9 Win32
    User-agent: Zeus 94934 Webster Pro V2.9 Win32
    User-agent: Zeus 95245 Webster Pro V2.9 Win32
    User-agent: Zeus 95351 Webster Pro V2.9 Win32
    User-agent: Zeus 97371 Webster Pro V2.9 Win32
    User-agent: Zeus Link Scout
    User-agent: asterias
    User-agent: b2w/0.1
    User-agent: cosmos
    User-agent: eCatch
    User-agent: eCatch/3.0
    User-agent: hloader
    User-agent: httplib
    User-agent: humanlinks
    User-agent: larbin
    User-agent: larbin (samualt9@bigfoot.com)
    User-agent: larbin samualt9@bigfoot.com
    User-agent: larbin_2.6.2 (kabura@sushi.com)
    User-agent: larbin_2.6.2 (larbin2.6.2@unspecified.mail)
    User-agent: larbin_2.6.2 (listonATccDOTgatechDOTedu)
    User-agent: larbin_2.6.2 (vitalbox1@hotmail.com)
    User-agent: larbin_2.6.2 kabura@sushi.com
    User-agent: larbin_2.6.2 larbin2.6.2@unspecified.mail
    User-agent: larbin_2.6.2 larbin@correa.org
    User-agent: larbin_2.6.2 listonATccDOTgatechDOTedu
    User-agent: larbin_2.6.2 vitalbox1@hotmail.com
    User-agent: libWeb/clsHTTP
    User-agent: lwp-trivial
    User-agent: lwp-trivial/1.34
    User-agent: moget
    User-agent: moget/2.1
    User-agent: pavuk
    User-agent: pcBrowser
    User-agent: psbot
    User-agent: searchpreview
    User-agent: spanner
    User-agent: suzuran
    User-agent: tAkeOut
    User-agent: toCrawl/UrlDispatcher
    User-agent: turingos
    User-agent: webfetch/2.1.0
    User-agent: wget
    Disallow: /
    Thoughts? Does anyone have an updated list of all these bloodsuckers that I'm blocking?

  10. #9
    Sundance's Avatar
    Sundance is offline Net Builder
    Join Date
    Jun 2009
    Posts
    169
    Thanks
    2
    Thanked 10 Times in 10 Posts
    I wouldn't even bother putting all those in a robots.txt; most of them don't follow the robots.txt rules anyway, since it's completely optional.
    Xbox 720 Next Generation Console - Video gamers check it out!

  11. #10
    dmi's Avatar
    dmi
    dmi is offline Net Builder
    Join Date
    Mar 2009
    Location
    N43°54′, E017°40′
    Posts
    242
    Thanks
    141
    Thanked 56 Times in 28 Posts
    Quote Originally Posted by Sundance View Post
    I wouldn't even bother putting all those in a robots.txt; most of them don't follow the robots.txt rules anyway, since it's completely optional.
    You are right. I already removed those. How about the rest?


