WordPress is a wonderful system; it does a blog the way a blog should be done. I would not, however, use it exclusively for a complete web site or exclusively as a site for SEO.
WordPress's advantage over hand coding is that WordPress is easier and faster. Hand coding's advantage is that it is nimbler, and when it comes to SEO, nimble is important. Sculpting page rank (or should I say trying to change the sculpture) in WordPress would be tortuous.
The concept of sculpting page rank is normally only addressed by Google when the internal links are incorrectly sculpted: … Google: Why would an FAQ page rank above a site’s homepage? “I would make sure whatever page you want to rank on … add a link to that page” …
By hand coding I can modify the internal link structure to make sure the pages “I want to rank on” have links pointing to them from the pages on the site that carry page rank.
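As a rough sketch of what hand coding makes easy: imagine every static page sharing a generated nav block that always links to the pages you want to rank. The names `PRIORITY` and `nav_block` are hypothetical, not from the original post; a real site would render this into its templates.

```python
# Hypothetical sketch: a shared nav block for a hand-coded site that
# guarantees internal links point at the pages we want to rank.
PRIORITY = ["/widgets.html", "/services.html"]  # pages we want to rank (example paths)

def nav_block(current_page):
    """Build the shared nav HTML, always linking to priority pages
    (skipping a page's link to itself)."""
    links = [p for p in PRIORITY if p != current_page]
    return "\n".join(
        f'<a href="{p}">{p.strip("/").split(".")[0]}</a>' for p in links
    )
```

Because the block is generated in one place, changing which pages the whole site points at is a one-line edit rather than a fight with a CMS.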
Doing this in WordPress is unnatural. WordPress's page rank focus is the front page and the newest ten posts, unless an article goes viral or becomes part of a link fest. I could create new posts that relate to the page I need more links to and link from them to that page … but if I did too much of this I would drive away the people who expect a blog to operate like a blog, and when I drive them away I also drive away those who would link to the blog. What WordPress does very well is new content.
Drupal is a wonderful system. Like WordPress it is quick and easy to use, and it has lots of modules, including things like shopping carts. Drupal 7 even allows modules to be installed without FTPing them to the site: just supply Drupal the link to the module's zip file and the transfer takes place over the servers' high-speed internet connections.
With Drupal I can sculpt links. The con I have is even debatable: with static pages, Apache provides a real last-modified date. Based on examination of my log files, the first request a search engine spider makes for a page is to get this date / HTTP header information. If the date has not changed it will move on and request another page; if it has changed it will fetch the updated page. Using the last-modified date, the spider goes through its list until it finds a page that has changed and updates that page.
I have written my own web spider with the same behavior, specifically because I want to index pages that are not already known or that have changed, so I can update them.
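A minimal sketch of that spider logic, assuming a `head()` callable that returns the Last-Modified value from a HEAD request (the function names and storage shape are my illustration, not the spider's actual code):

```python
from email.utils import parsedate_to_datetime

def pages_to_refetch(known_dates, head):
    """Return URLs whose Last-Modified differs from what we saw last time.

    known_dates: {url: HTTP-date string recorded on the previous crawl}
    head: callable(url) -> current Last-Modified HTTP-date string
          (hypothetical fetcher; a real spider would issue a HEAD
          request with http.client or urllib).
    """
    changed = []
    for url, seen in known_dates.items():
        current = head(url)
        if parsedate_to_datetime(current) != parsedate_to_datetime(seen):
            changed.append(url)  # the date moved: worth a full GET
    return changed
```

Only the URLs this returns get a full GET, which is what keeps the crawl cheap on a site whose dates are honest.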
The response to a HEAD request on a Drupal page always looks something like:
Date: Sat, 10 Jul 2010 17:06:20 GMT
Server: Apache
X-Powered-By: PHP/5.2.9
X-Oneclick-Backend: oneclick2
Expires: Sun, 19 Nov 1978 05:00:00 GMT
Cache-Control: store, no-cache, must-revalidate, post-check=0, pre-check=0
Last-Modified: Sat, 10 Jul 2010 17:06:20 GMT
Vary: Accept-Encoding
Content-Encoding: gzip
Content-Length: 3212
Keep-Alive: timeout=2, max=100
Connection: Keep-Alive
Content-Type: text/html; charset=utf-8
The Last-Modified: Sat, 10 Jul 2010 17:06:20 GMT is always the current time. When a page has not actually changed but search engines re-fetch it anyway because the date moved, that crawl is a wasted opportunity to have them pick up genuinely new content instead.
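To see the cost, here is a sketch of a conditional GET using Python's urllib (`is_unchanged` is my name for it). A server that honors If-Modified-Since answers 304 Not Modified and the spider keeps its cached copy; a server that stamps Last-Modified with the current time, as the Drupal pages above do, answers 200 every time, forcing a full re-download.

```python
import urllib.error
import urllib.request

def is_unchanged(url, last_seen_http_date):
    """Send a conditional GET; True means the cached copy is still good.

    A well-behaved server compares last_seen_http_date against the
    page's real modification time and answers 304 when nothing changed.
    A server whose Last-Modified is always "now" will answer 200 every
    time, so the spider re-downloads an unchanged page.
    """
    req = urllib.request.Request(
        url, headers={"If-Modified-Since": last_seen_http_date}
    )
    try:
        with urllib.request.urlopen(req) as resp:
            resp.read()       # 200: server claims the page changed
            return False
    except urllib.error.HTTPError as err:
        if err.code == 304:   # Not Modified: cached copy is current
            return True
        raise
```

With Drupal's always-current Last-Modified, this function can never return True, which is exactly the wasted crawl described above.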
My conclusion is that in both cases static content should still exist on the site if the site is to become “large” … if we are talking less than a couple hundred pages it is not an issue.
To dig up an old post which is still relevant but no longer within two clicks … Making Dynamic Content Search Spider Friendly … it is good stuff.