Quote:
Originally Posted by FightThisPatent
It's been theorized that Google looks at the sentences on a page and compares them against its database of crawled content, to see if the same sentences/content is being used elsewhere. If so, it assigns a penalty when calculating PageRank...
So copying Wikipedia articles onto your sites, thinking it will get you better results, is nullified by this.
On a different note, the thickcash affiliate manager should be using the new T3 Affiliate Analysis tool to help their affiliates maximize traffic to the program.
Hit me up on ICQ to see a demo.
Fight the interjection!
Stop messing up my fun.
There are a lot of myths surrounding this, and simple word-replacement schemes are easily detected regardless; spinning a feed that way is no different from republishing the exact same feed.
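For anyone wondering what "easily detected" can mean in practice, here's a rough sketch of shingle-based near-duplicate detection. The shingle width and the Jaccard threshold are my own illustrative choices, not anything Google has published:

```python
# A toy near-duplicate check using word shingles. The 3-word shingle
# size and the Jaccard measure are illustrative assumptions only;
# nothing here reflects any search engine's actual pipeline.

def shingles(text, w=3):
    """Return the set of all w-word shingles in a text."""
    words = text.lower().split()
    return {" ".join(words[i:i + w]) for i in range(len(words) - w + 1)}

def jaccard(a, b):
    """Jaccard similarity between two shingle sets (0.0 to 1.0)."""
    return len(a & b) / len(a | b)

original = "the quick brown fox jumps over the lazy dog near the old mill river"
spun     = "the quick brown fox leaps over the lazy dog near the old mill river"

sim = jaccard(shingles(original), shingles(spun))
# Swapping one word only disturbs the few shingles containing it;
# two genuinely unrelated texts would score near 0.0 instead.
print(f"similarity: {sim:.2f}")
```

The point: replacing scattered words leaves most shingles intact, so the spun copy still scores far closer to the original than any independently written page would.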
What I'm curious about is how people arrive at the conclusion that a very old spamming technique, one that's been around since the beginning, is now new and innovative technology.
Even more interesting is how anyone determined that Google does not or cannot detect simple word-replacement schemes built from dictionary databases across what could potentially be thousands of domains.
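Scaling that check across thousands of domains is the part people underestimate. A fingerprinting scheme like SimHash is one plausible way to do it: each page collapses to a short fingerprint, and near-duplicates land only a few bits apart, so no pairwise text comparison is needed. A minimal sketch under those assumptions (the 64-bit size is my choice, not a known detail of anyone's implementation):

```python
# A minimal SimHash sketch. Near-duplicate texts produce fingerprints
# with a small Hamming distance, which makes duplicate detection
# cheap even across huge collections of pages.

import hashlib

def simhash(text, bits=64):
    """Build a SimHash fingerprint from a text's tokens."""
    votes = [0] * bits
    for token in text.lower().split():
        h = int(hashlib.md5(token.encode()).hexdigest(), 16)
        for i in range(bits):
            votes[i] += 1 if (h >> i) & 1 else -1
    # Collapse the vote vector to one bit per position.
    return sum(1 << i for i in range(bits) if votes[i] > 0)

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

doc_a = "google compares the sentences on a page against its index"
doc_b = "google compares the phrases on a page against its index"
# Near-duplicates land close together in Hamming space.
print(hamming(simhash(doc_a), simhash(doc_b)))
```

Once every page is a 64-bit number, finding spun copies across a thousand domains is an index lookup, not a text-by-text comparison.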
;)
FIGHT THE SPAMMER!