Looking at bad information in SEO spam email


There are numerous people who send out emails that contain myths or old information about SEO. It is very ironic when they send these to Google, post them as comments on wordpress.com sites, etc.

“Hello Web Admin, I noticed that your On-Page SEO is is missing a few factors, for one you do not use all three H tags in your post, also I notice that you are not using bold or italics properly in your SEO optimization.”

The first problem is that wordpress.com sites use a theme that cannot be changed unless one makes special arrangements with wordpress.com to host the site instead of using a standard wordpress.com site. Apparently whurosp failed to notice where he was posting.

The second problem: bold and italics tags have been replaced by CSS rules and either are deprecated or no longer validate because they are obsolete. Ironically, search engines have played a major role in HTML5. They know how to parse CSS to determine the size of characters, and they use that size, not the tags, to determine the importance of text. The three H tags are part of the HTML4 outline specification, which the W3C has placed on a list of features likely to be dropped. H tags and bold tags were important until CSS2 was widely adopted. The proper use of bold and italics tags today is not to use them, and WordPress doesn't.

“On-Page SEO means more now than ever since the new Google update: Panda. No longer are backlinks and simply pinging or sending out a RSS feed the key to getting Google PageRank or Alexa Rankings, You now NEED On-Page SEO.”

The Panda update, rolled out back in 2011, never replaced backlinks; it removed sites with low-quality content from the index, such as pages filled with affiliate links and advertisements, and pages containing only a subset of information about a subject that is available on a million other pages. The history of Panda is that a group of human reviewers looked at sites and judged them on criteria such as "would you purchase something from this site?"

Without going into more detail about Panda, let's just say this statement is completely backwards. The solution, if one is having problems with Panda, is to get links from authority sites and social media to send a signal that the site has quality content. It is ironic that whurosp is attempting to do what he says no longer works. However, it is Penguin, not Panda, that focuses on low-quality links.

“So what is good On-Page SEO? First your keyword must appear in the title. Then it must appear in the URL. You have to optimize your keyword and make sure that it has a nice keyword density of 3-5% in your article with relevant LSI (Latent Semantic Indexing).”

eBay.com, Wikipedia.org, etc., would not appear in search results if the keyword had to appear in the URL. 99% of spammy sites have the keyword in the URL and do not appear. It is generally accepted that having a keyword in the URL is not a bad thing, but if one is expecting magic because the keyword is in the URL, they will likely find themselves in the 99%. The only time the keyword must appear in the URL is when inurl: is used as part of the search.

Actually, good on-page SEO matches up fairly well with good quality content, and as search engines improve their ability to discern quality content, it will only do better. Normally it is a good idea for the title to be related to the content; using a keyword in the title helps search engine users know what the content is about, and most results do have a keyword in the title.

However, on closer inspection, one can easily see that Google uses its Knowledge Graph to produce the search results. If one googles "puppies", often "puppies" is not used in the title, and "puppy" is highlighted by Google as the word it used to determine the result is relevant.

Likewise, Google's Knowledge Graph makes keyword density obsolete, and by the way, the figure was never 3-5%. Doing a Google search and looking at the keyword density of the results proves that statement wrong in almost every search.

“Then you should spread all H1,H2,H3 tags in your article. Your Keyword should appear in your first paragraph and in the last sentence of the page.”

Wrong: there is no magic placement of keywords. Panda prefers that words related to the search appear above the fold; that is to say, people should not need to scroll down to determine that they are on a page related to their search. Beyond this, all words should be in natural English, or whatever language is used on the page. Google does have a reading-level option in its advanced search … content should be at the reading level of the demographic that does the search. SEO copy is a bit of an art form: including words relevant to the search while still being good marketing copy … when done well it is like poetry.

“There should be one internal link to a page on your blog and you should have one image with an alt tag that has your keyword. ….wait there’s even more Now what if i told you there was a simple WordPress plugin that does all the On-Page SEO, and automatically for you?”

One internal link? LOL. How are visitors going to navigate with one internal link? They are not. This is a recipe to drive away visitors. Internal links should go from pages where the demographic or content is such that people are likely to follow them. There should be at least one link pointing to every page, or nobody, including search engines, will find the page.

The alt attribute is for the visually impaired. It's a good idea to have an alt attribute on one's logo that tells a visually impaired user it links to the company home page, etc. It is used by search engines, but it carries no more weight than typing the same word out anywhere else on the page.
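
A hypothetical illustration (the file name and company are made up): a linked logo whose alt text tells a screen-reader user where the link goes.

<a href="/"><img src="logo.png" alt="Acme Widgets home page"></a>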

I've already pointed out that wordpress.com does not allow sites to change themes or add custom plugins, regardless of whether they are useful or a joke.

Whoever is paying whurosp to spam wordpress.com sites is wasting their money.

XMLHttpRequest() with onreadystatechange function (Async=true)


As mentioned in Pre XMLHttpRequest AJAX, the XMLHttpRequest object was initially implemented by Microsoft in 1999 as an ActiveX object in Internet Explorer. However, it was later adopted as a standard JavaScript browser object. HTML5-era browsers added an onload event to this object. Before that, and with IE8 today, the onreadystatechange event must have a listener to determine when the data has been loaded.
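
As a quick sketch of the newer form (this assumes a browser with the HTML5-era additions; IE8 would still need the onreadystatechange approach shown below), the same check can be written with onload, which fires once when the whole response has arrived:

<script>
var xhr = new XMLHttpRequest();
xhr.open("GET", "/domath2.php?a=1&b=2", true);
xhr.onload = function () {
 /* fires once, after the entire response has arrived */
 document.getElementById("answer").value = xhr.responseText;
};
xhr.send(null);
</script>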

Data received by XMLHttpRequest() can be of any type; XML, HTML, and plain text are normally used. Binary data can be downloaded, but handling binary data in plain JavaScript is more complicated.

XMLHttpRequest() was created for getting data from the same domain the web page was downloaded from. Cross-domain data sharing has security issues; it can be done, but the host sharing the data must include special headers in the HTTP response to allow it.
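
As a sketch: if domath2.php lived on another host that you control, that host would need to send a CORS response header such as Access-Control-Allow-Origin before the browser will hand the response to your script. In PHP that could look like the following (the allowed origin is an assumption):

<?php
/* hypothetical cross-domain version of domath2.php on another host you control */
header("Access-Control-Allow-Origin: http://www.example.com"); /* the allowed origin is an assumption */
echo $_GET["a"] + $_GET["b"];
?>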

<script>
var xmlHttp = new XMLHttpRequest(); /* current usage */
function getReady() {
 /* this event fires more than once; .readyState changes as the request progresses:
  0 = uninitialized
  1 = loading
  2 = loaded
  3 = interactive
  4 = complete
 */
 if (xmlHttp.readyState == 4) {
  document.getElementById("answer").value = xmlHttp.responseText;
 }
}
function getAnswer(method) {
 /* method would select the operation; this example only adds */
 var d = new Date();
 var n = d.getTime(); /* unique value to pre-emptively prevent GET request caching */
 var a = document.getElementById("a").value;
 var b = document.getElementById("b").value;
 xmlHttp.open(
    "GET",
    "/domath2.php?a=" + a + "&b=" + b + "&nocache=" + n,
    true);
 /* The last value means Async=true,
    so we need to listen to onreadystatechange */
 xmlHttp.onreadystatechange = getReady;
 xmlHttp.send(null); /* no body for a GET; post data would go here for a POST */
}
</script>
<br/><input type="text" id="a">
<br/><input type="text" id="b">
<br/><input type="button" value="Add" onclick="getAnswer('add');">
<br/><input type="text" id="answer">
<span id="collection"></span>

Current usage of XMLHttpRequest() does not use an ActiveX object. In this example a global xmlHttp is created so that any function can interact with the object, instead of using closures.

The HTML calls the getAnswer function, which sets up and sends the request to the server. Because this example is a GET request, a unique value (the current time in milliseconds) is appended to the request to prevent browser caching.

<?php 
$a = $_GET["a"];
$b = $_GET["b"];
echo $a+$b;
?>

The onreadystatechange event fires for each state of the request. The final state means the data from the PHP script has completely loaded. In this case the result is a number in text format.

In this example we are still communicating with the server via GET requests. The security concern with GET requests is that the data sent to the server appears in the log files. This will be addressed as we look at using jquery.ajax.
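
As a preview, and only as a sketch against the same domath2.php, the data can be moved out of the URL by switching to a POST request; the values then travel in the request body rather than in the URL that ends up in access logs. The server side would also need to read $_POST instead of $_GET.

/* inside getAnswer(), instead of putting a and b in the URL: */
xmlHttp.open("POST", "/domath2.php", true);
xmlHttp.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
xmlHttp.onreadystatechange = getReady;
xmlHttp.send("a=" + a + "&b=" + b); /* the data travels in the request body */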

Pre XMLHttpRequest AJAX


The XMLHttpRequest object was initially implemented by Microsoft in 1999 as an ActiveX object in Internet Explorer. Methods of communicating with a web server from a web page before JSON may be called old school by some; however, JSONP (padded JSON), used today for cross-site data sharing, is very similar to those old-school methods.

The object here is to send data to the web server and get back data that will be used on the web page. The data may come from a MySQL database, but for this example, to keep it simple, the server will only add two numbers in PHP and send the response back as padded data. JSONP responses are padded data too; the payload is just more complicated.

<br/><input type="text" id="a">
<br/><input type="text" id="b">
<br/><input type="button" value="Add" onclick="getAnswer('add');">
<br/><input type="text" id="answer">
<span id="collection"></span>

<script>
function callbackfunction(answer) {
 document.getElementById("answer").value = answer;
}
function getAnswer(method) {
 var d = new Date();
 var n = d.getTime(); /* unique value to pre-emptively prevent GET request caching */
 var a = document.getElementById("a").value;
 var b = document.getElementById("b").value;
 var caller = document.createElement("script");
 caller.setAttribute( 'src', 
"domath.php?callback=callbackfunction&a="+a+"&b="+b+"&nocache="+n );
 document.getElementById("collection").appendChild( caller ); 
 /* script will not load until it is added to the document */
 /* appendChild is supported in browsers newer than IE 5.5 */
}
</script>

A script element is created to be added to the document, and its src is set to include the form data as a standard GET request. Because GET requests often get cached, a unique nocache value, the current time in milliseconds, is added to the request to prevent caching. callback= is also added to the URL, which is standard practice for JSONP.

The PHP script "domath.php" does the math and returns the result as a padded function call.

<?php
$callback = $_GET["callback"];
$a = $_GET["a"];
$b = $_GET["b"];
$result = $a+$b;
echo $callback."(".$result.");";
?>

On the web page, the script is neither downloaded nor run until it is added to the page. The browser then sees the following script.

callbackfunction(3);

This is a call, with an argument, to a function that already exists in the web page; the browser runs that function as soon as the script has downloaded. The function call therefore provides the equivalent of an onload event.

It should be noted that while you can wrap data in a script tag that is neither script nor padded, you cannot download a script that is not a runnable script; data wrapped in a script tag must be part of the HTML document itself. More on that in a later post. Downloaded scripts used to communicate between the client browser and the host must be padded, either as a function call or as a var assignment. Function-call padding provides the onload-style notification that pre-HTML5 browsers, such as IE8, lack for the script element.
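
For contrast, here is a sketch (not the post's example) of the same response padded as a var assignment rather than a function call. Nothing runs automatically, so the page has to read the variable itself once it knows the script has finished loading:

var answer = 3; /* the downloaded script: an assignment, not a call */
document.getElementById("answer").value = answer; /* code on the page, run after the script has loaded */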

There are two security concerns with JSONP called via the GET method. One is that the data being sent to the server is exposed in log files; the second is that the location of the shared data must be trustworthy. The POST method requires using XMLHttpRequest, which will be covered in another post.

White hat link development today and tomorrow.


The rules for link development have evolved considerably over the last ten years. However, there have been almost no changes to what I call white hat link development. Both Google and Bing today have link disavow tools, where one can submit the links pointing to one's site to tell the search engines these are links the site owner no longer wishes to be associated with, in hopes of removing a penalty the site may carry because of unnatural links. These tools went up around the time of the Penguin algorithm rollouts.

How we got to today

To understand the differences, let's go way back in time to the infancy of Google. Links pointed from one site to another as recommendations. These recommendations mirrored what would happen if one were to walk up to the counter of, say, a store that sold greeting cards and ask, "Where can I go to print my own card, which I made on my computer?" The clerk would make a goodwill recommendation to another establishment that offered that service.

By counting the number of links pointing to a site or web page across the internet, a fairly good index could be made, which was very useful to a database application as a method of searching the internet. Infoseek was based primarily on counting links to create its core database; its database application then produced answers to queries by going through the core database and pulling up the pages whose content matched the search, adding to that content the words that were in the links pointing toward the site.

Google improved on the method of using external factors to rank sites by deciding that a link from a popular site was worth more than a link from a site that was not as important. Popularity was determined by the number of links pointing toward a site.

Let me digress: a search engine that always shows the same set of sites has a problem, because once people have seen all those sites they will stop using that search engine. Hence there is a need for other algorithms to surface new content that has recently become popular, trendy, viral, or fresh. The newest of these algorithms is named Hummingbird, which affects 9 out of every 10 searches. While external factors play a part in these algorithms, they are outside the scope of this post.

With the rule being "who has the most links pointing toward their site," many marketers began aggressively accumulating links from anybody and everybody, including their own sites, since the patents suggested those links counted, as did the links on all the pages of the site they wanted to market. Some people purchased tens of thousands of domains that could be used to link to their sites. Search engines soon began referring to sites using unnatural methods as search engine spam, set up guidelines for what was acceptable, and a distinction was made between white hat and black hat SEO practices. Those with tens of thousands of single-page sites were labeled thin content and ignored. Those with link-farm schemes, where ten thousand sites all added the same set of links in order to get a link from the collective of ten thousand sites, were also ignored as soon as they became big enough to appear above the radar.

There are tens of thousands of forums online; almost all allow a link from the profile page and many allow signature links. They too have been used aggressively by marketers. Drive-by blog comments have been so bad that most blogs filter comments and review them before they are published.

Mega menus and site maps on every page are ignored today; navigation content is examined, and site-wide links to other sites are discounted. Note that navigation content should be white hat and point to the important landing pages on a site where people can begin browsing.

 

Today we have Penguin 2.1, tomorrow Orca 1.0

White hat link development that mirrors the situation of a question being asked of the clerk at an establishment has not been harmed, and it is part of what needs to be done to recover a site from a history of black hat link development.

Penguin has been refined to determine which of the tens of thousands of forums are quality forums. Many sites have expertise and share it in forums; search engines want to list experts, but not aggressive drive-by marketers who just create a profile, say hello, and leave.

A white hat link development strategy has been and remains a good practice. The question white hat link developers ask is: if I had a stack of business cards, where would be the best place to put them in hopes of bringing in people who would be interested in the product or service? The people most likely to turn into customers are those who follow white hat links; they are in fact better than search engines for promotion, as one qualified lead is worth more than a hundred unqualified visitors.

Forget search engines while doing link development, and the links become natural links that have value in and of themselves. Yes, it is harder: instead of filling out a form on a forum, a discussion with a prospective partner needs to take place. Some of these partners may not have a popular site in the eyes of search engines today, which is natural. In the long run, when the next algorithm comes out, maybe called Orca the killer whale, sites with natural goodwill links will grow.

Online Documentation for HTML5, CSS3, etc, etc.


Some people may tend to believe that web professionals recall every detail of the numerous technical specifications used online. Others may realize that the pastor at the local church does not recite from memory every scripture in the Bible. The trick is to know what the information is and where the information is.

Heronote at https://code.google.com/p/heronote/ has about 150 ebooks and Windows help files (.chm) online. His work is a "must" for the library of anybody doing web development.

IE8 falls below 10% usage worldwide.


The Countdown

HTML5 video has not been able to show its potential because of the large number of users still on IE8. Yes, there are solutions for showing a video to IE8 by embedding it in Flash; but HTML5 video can go beyond just showing a video.

Exposing a video's controls to script on the page allows the video to be interactive with other content on the page, or interactive outside the video box. Site menus, for example, can turn on or switch video content based on where the user's attention is, by looking at scroll position or where the mouse is on the page.
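
A minimal sketch of that idea (the element id "promo" is a hypothetical <video> element): pause the video whenever it scrolls out of view and resume it when it scrolls back in.

<script>
var promo = document.getElementById("promo"); /* hypothetical <video> element */
window.onscroll = function () {
 var box = promo.getBoundingClientRect();
 var visible = box.bottom > 0 && box.top < window.innerHeight;
 if (visible) {
  promo.play();  /* resume when the video box is on screen */
 } else {
  promo.pause(); /* pause when the user scrolls past it */
 }
};
</script>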

On a related note, Netflix drops Silverlight and moves to HTML5.

The major objection to HTML5 video is protecting the video, or preventing it from being copied. However, that is being solved.

Editing a scene in a video with AviSynth Plus.


As pointed out in an earlier post, AviSynth can read a video file and present it to other tools for editing. To pull out a single scene, the function is Trim … Use it as follows …

AVIsource("example.avi").trim(150,300)

That script will produce frames 150 to 300 of example.avi … use any other video program to edit the clipped scene or create a new one. To combine the video with the clip, using the original video's sound track, use the following …

v = AVIsource("example.avi").trim(0,150) 
  \+ AVIsource("newscene.avi") 
  \+ AVIsource("example.avi").trim(300,9999) 
a = AVIsource("example.avi")
AudioDubex(v,a)

This script is more than a single line.

On the first line we assemble the video as a variable v. We want to replace only frames 150 through 300, so we take the original beginning, then add the replacement scene, and finally add the end of our original clip.

The second line pulls out the audio from the original clip.

The last line combines the video and audio back into a single video output. I am assuming the replacement scene is 150 frames; if it were not, the audio would go out of sync by however many frames the lengths differ. That of course can be corrected in the script with more lines of code, or the audio could be included with the replacement scene.

Having the audio be part of the replacement scene would make the script a single statement adding three sources.

AVIsource("example.avi").trim(0,150) 
  \+ AVIsource("newscene.avi") 
  \+ AVIsource("example.avi").trim(300,9999)

There is no need to use variables in that case, and the replacement scene can be any length.

Note: a backslash (\) allows a statement to continue on the next line, so the code can be formatted for readability.

 

Note that the two following scripts produce the same results.

AVIsource("example.avi")

and

videovariable = AVIsource("example.avi")

videovariable

What AviSynth supplies, or returns, is the last or final video clip produced by the script.

Video created for Las Rocas resort


Using AviSynth as a source for other video production tools.


I am one of those people who tends to ask, "How can it be made better?" More often than not the answer is to use more than one program to produce the final product. Debugmode's Wax 2.0, for example, interfaces with their WinMorph application and provides what people might call Photoshop-style morphing for video objects. WinMorph can also create video clips from two still images, morphing from one image to the other … the CG effect is commonly used to go from one person's face to another; however, WinMorph is not limited to faces. Wax 2.0 itself has a native perspective filter to place a video on a flat surface within another video, such as a TV screen in the shot.

WinMorph can be used with any two images to create a video of an object moving or morphing; or use a single image and move, say, the ocean surf, flags, or trees of a static image and turn it into a moving image. One can fake (create with CG) the effect of a moving camera by morphing 3D objects, creating the illusion that a series of flat frames is a 3D object because the texture of the flat image is moving … the illusion of a rotating image can be better understood if one considers the effect on the brain of the shadow of a rotating object. The shadow appears to rotate in one direction and then in the other, but when texture is added, the movement of the texture completes the illusion that an actual object is rotating and tells the brain which direction it is rotating in.

The difference between a video of a rotating object created by CG and one shot with a camera is where the illusion is created; both create the illusion on the screen. A video is nothing more than a series of static images.

AviSynth has a lot of powerful video features. I am not discounting it as a stand-alone system, but I am looking to produce final products with more features than are available from any one program, and with quicker production times. Wax reads uncompressed video, which is best for production, but sometimes video is only available in a compressed format like FLV, and the clip being produced does not exceed the quality of that FLV. In that case the video may need to be converted.

AviSynth converts video in real time to other formats needed by other programs. It even works well for the minor task of getting Windows Media Player to play FLV files: just open the example.avs script below in Windows Media Player.

Say, for example, there is a green screen behind a subject and the video is only available as FLV. Wax has good green screen and perspective abilities, so it is able to fill in that green screen with any video.

The AviSynth script to use and convert an FLV file is one line of code. Place this code in example.avs:

DirectShowSource("example.flv")

Open Wax and use example.avs as the video source, add the video you want to use for the green screen, apply the perspective filter to that video, and apply the green screen filter.

PHP Development Web Server.


As of version 5.4.0, PHP provides a built-in web server invoked via a command-line switch. The switch can be used in a desktop shortcut by adding it to the target property: "C:\PHP\php.exe -S localhost:80". URI requests are served from the current working directory, or from the start-in directory, "C:\xampp\htdocs\seobydesign", another property of the shortcut. (You can leave the start-in field empty if you don't mind having the shortcut in the same directory as the HTTP documents; very useful if you set it up to run off a flash drive.)

Since PHP does not require any actual installation, just unzipping the current archive and building a shortcut makes the web server fully ready to go.

I would not expect any major mentions of this web server on most development blogs. Most developers have versions of Apache running as development systems, and most servers on the internet are Apache-based. Apache has many configuration options, as well as .htaccess configuration, and can handle a large number of simultaneous connections. The built-in PHP web server, on the other hand, takes very few system resources. On-the-fly configuration can be done with a PHP script the documentation refers to as a "router" script.
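
A minimal sketch of a router script, assuming it is saved as router.php next to the documents and the server is started with something like "C:\PHP\php.exe -S localhost:80 router.php": returning false tells the built-in server to serve the requested file as-is, and anything else is handled by the script.

<?php
/* hypothetical router.php, started with: php.exe -S localhost:80 router.php */
$path = parse_url($_SERVER["REQUEST_URI"], PHP_URL_PATH);
if (is_file(__DIR__ . $path)) {
    return false;               /* false: let the built-in server serve the file directly */
}
require __DIR__ . "/index.php"; /* hypothetical front controller handles everything else */
?>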

Out of the box, the built-in web server handles PHP scripts as well as static HTML and web assets. No MySQL server is provided, although a MySQL server can be installed independently. Around 25% of websites use some sort of CMS, and 4 out of 5 of them use WordPress. WordPress uses .htaccess, mod_rewrite redirects, and MySQL.

Still, the ability to have a localhost web server with PHP applications up and running in less than five minutes makes this server very interesting. I assume the PHP and HTML could even be loaded from a CD or a memory stick.
