Friday, September 23, 2011

Featuring the Screaming Frog SEO Spider

Screaming Frog SEO Spider is a crawling tool well worth using alongside the SEOmoz Crawl Test Tool and the Raven Internet Marketing Tool. For a start, it provides useful metrics and it is absolutely free.

Screaming Frog SEO Spider is a desktop program, installed on your own computer, that crawls a website's links, images, CSS, scripts, and apps from an SEO perspective. Function-wise, the tool consolidates key on-page elements for search engine optimization and presents them in tabs by type, letting you easily filter for common SEO issues. An important feature is that its reports can be exported to an Excel file, which makes crawl data easier to view, analyze, and filter.
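To get a feel for what a crawler like this does under the hood - fetch a page, parse the HTML, and pull out on-page SEO elements - here is a minimal Python sketch using only the standard library. This is just an illustration of the idea, not how Screaming Frog itself works, and the sample HTML is made up:

```python
from html.parser import HTMLParser

class SEOElementParser(HTMLParser):
    """Collect basic on-page SEO elements: title, meta description, h1s, links."""

    def __init__(self):
        super().__init__()
        self.title = ""
        self.meta_description = ""
        self.h1s = []
        self.links = []
        self._in_title = False
        self._in_h1 = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "h1":
            self._in_h1 = True
            self.h1s.append("")
        elif tag == "meta" and attrs.get("name", "").lower() == "description":
            self.meta_description = attrs.get("content", "")
        elif tag == "a" and "href" in attrs:
            # Collected hrefs are what a spider would queue up to crawl next.
            self.links.append(attrs["href"])

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False
        elif tag == "h1":
            self._in_h1 = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data
        if self._in_h1 and self.h1s:
            self.h1s[-1] += data

# Invented sample page, standing in for a fetched URL.
parser = SEOElementParser()
parser.feed("<html><head><title>Home</title>"
            "<meta name='description' content='A sample page'></head>"
            "<body><h1>Welcome</h1><a href='/about'>About</a></body></html>")
```

A real spider would repeat this for every discovered link and tabulate the results per page, which is essentially what the tool's per-type tabs show.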

This SEO tool makes site analysis a lot easier. Why not? It lets the SEO specialist analyze medium to large sites that would be labor-intensive and time-consuming to review manually. It also gives the specialist a clear view of every part and aspect of the website, so the SEO professional does not miss any redirect, meta refresh, or duplicate-page issue.

According to Screaming Frog, the tool provides the following data:

    * Errors – Client & server errors (No responses, 4XX, 5XX)
    * Redirects – (3XX, permanent or temporary)
    * External Links – All followed links and their subsequent status codes
    * URI Issues – Non-ASCII characters, underscores, uppercase characters, dynamic URIs, URIs longer than 115 characters
    * Duplicate Pages – Hash value / MD5 checksum lookup for pages with duplicate content
    * Page Title – Missing, duplicate, over 70 characters, same as h1, multiple
    * Meta Description – Missing, duplicate, over 156 characters, multiple
    * Meta Keywords – Mainly for reference as it’s only (barely) used by Yahoo.
    * H1 – Missing, duplicate, over 70 characters, multiple
    * H2 – Missing, duplicate, over 70 characters, multiple
    * Meta Robots – Index, noindex, follow, nofollow, noarchive, nosnippet, noodp, noydir etc
    * Meta Refresh – Including target page and time delay
    * Canonical link element & Canonical HTTP headers
    * X-Robots-Tag
    * File Size
    * Page Depth Level
    * Inlinks – All pages linking to a URI
    * Outlinks – All pages a URI links out to
    * Anchor Text – All link text. Alt text from images with links
    * Follow & Nofollow – At link level (true/false)
    * Images – All URIs with the image link & all images from a given page. Images over 100kb, missing alt text, alt text over 100 characters
    * User-Agent Switcher – Crawl as Googlebot, Bingbot, or Yahoo! Slurp
    * Custom Source Code Search – The spider allows you to find anything you want in the source code of a website! Whether that’s analytics code, specific text, or code etc. (Please note – This is not a data extraction or scraping feature yet.)
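Several of the checks in the list above are simple length rules (titles over 70 characters, meta descriptions over 156). As a rough sketch of how such a rule-based audit works - with thresholds taken from the list and page data invented for illustration - something like this would do:

```python
# Thresholds taken from the feature list above.
TITLE_MAX = 70
META_DESC_MAX = 156

def audit_page(url, title, meta_description):
    """Return a list of issue strings for one crawled page."""
    issues = []
    if not title:
        issues.append("missing title")
    elif len(title) > TITLE_MAX:
        issues.append(f"title over {TITLE_MAX} characters")
    if not meta_description:
        issues.append("missing meta description")
    elif len(meta_description) > META_DESC_MAX:
        issues.append(f"meta description over {META_DESC_MAX} characters")
    return issues

# Invented example page with a title but no meta description.
print(audit_page("https://example.com/", "Home", ""))
# -> ['missing meta description']
```

Run across every crawled URL, rules like these produce exactly the kind of filterable issue columns the tool exports to Excel.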
