Archive for the 'Google' Category
Top 7 Reasons Why Optimizing Porn Sites is Hard
• Wednesday, January 16th, 2008
Sebastian recently fired up an experiment so I thought I’d post this up to give his experiment some link juice. While I’m at it, I’ll write a short rant about why optimizing porn sites isn’t easy.
Digg/Reddit/StumbleUpon are nearly useless. Of course, there are alternatives and workarounds, but really, life would be easier if Digg had […]
Matt Cutts Defaces Dark SEO Team - or Does He?
• Monday, April 2nd, 2007
Yesterday, you probably noticed Matt Cutts’ blog supposedly defaced by the Dark SEO team (ya know, those guys who have all those fake TBPR pages up), as reported by Search Engine Land in a piece titled Matt Cutts gets hacked.
The Dark SEO Team has had a bit of a beef with Google’s Matt Cutts […]
How Do 6 SEOs Miss The Obvious?
• Wednesday, March 7th, 2007
I used to spend some time looking at sites people posted on Google Groups Webmaster Help as a form of recreation. I enjoyed looking at real examples instead of listening to assertions on WMW that people could not back up. So when I read Search Engine Journal's piece in which 6 SEOs of caliber write their take on techsmith.com, I […]
Long Tail De Jour: what’s the point in adding more pages to my site if google doesn’t index them or puts them into the supplemental index?
• Tuesday, January 23rd, 2007
A couple days ago, someone hit my site with this longtail:
what’s the point in adding more pages to my site if google doesn’t index them or puts them into the supplemental index
Crazy, long huh? :) When I start talking to myself, my friends start to worry. When I start talking to Google, I need a […]
SEO Myth: There is No Duplicate Content Penalty
• Friday, January 5th, 2007
This is probably old news to black hats, but I often hear people say “there’s no duplicate content penalty.” Newbies worry they’ll incur some kind of penalty for having identical copyright text across 100 pages or something, and other people like me jump in to alleviate their fears: “Google doesn’t penalize duplicate content; it filters […]
Why Google Will Not Move Away From PageRank
• Wednesday, December 13th, 2006
Recently, I've gotten a little flak in Google Groups Webmaster Help for coming down hard on people in the "Google is broke" camp. Basically, some of them were upset that the supplemental index was based heavily on PageRank, because Google's PageRank paradigm is broken and unfair:
Google may misread the intent of a link. For example, […]
Wackiest Google Site Command Bug I’ve Ever Seen
• Tuesday, October 17th, 2006
Google's site: command has been acting up lately, and Googlers, I assume, are working feverishly day and night to fix the damn thing across various datacenters, but check this out. While running a site: command tonight, I came across this bizarre SERP:
Google site search returns ginormous SERP snippets
Here’s the search url I used (broken in […]
Google Docs Gone Bad - Google, Fix the Tags
• Thursday, October 12th, 2006
Google Docs tags aren’t working right now, which means all the new articles you write in Google Docs will end up in a big disorganized pile of whatev, and you get to go back later (when tags are working again) and clean up the mess. This may not be news to you if you never […]
Lost and Grey’s Anatomy on My Google Calendar
• Thursday, September 28th, 2006
We all read the Google blog's recent announcement about a new Google Calendar feature that lets you add web content events, like "weather forecasts, moon phases, and even Google doodles." Well, to tell you the truth, I don't care about all that, though I've already added holidays to my calendars. What I desperately need are TV […]
Google Video Still Keeps its Hands Off Pornographic Material
• Monday, August 14th, 2006
Philipp Lenssen reported a few days ago that "Google Video now* allows you to upload adult videos" (with the added disclaimer that he's not sure if the feature is new). As Jimmy Ruska points out, the "Adult/Mature" option is not new; in fact, it's probably been there from the get-go. The UI may have changed, but […]
Robots.txt Not Cumulative
• Monday, August 14th, 2006
Yesterday, gs1md wrote a meaty post on WMW about how Googlebot responds to a robots.txt containing directives for both User-Agent: * and User-Agent: Googlebot. Both GoogleGuy and Vanessa Fox dropped by to clarify the situation: the directives are not cumulative. When you use both specifications, Googlebot will go with the User-Agent: Googlebot section alone.
Google Webmaster Help: robots.txt
Google Webmaster Help: How Do I Block Googlebot?
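You can check this precedence rule locally with Python's urllib.robotparser, which implements the same "most specific section wins" behavior. The robots.txt contents and URLs below are made up for illustration:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt with both a wildcard section and a Googlebot section.
rules = """\
User-agent: *
Disallow: /private/

User-agent: Googlebot
Disallow: /images/
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# Googlebot matches its own section, so only /images/ is off-limits to it;
# the Disallow: /private/ line in the * section does NOT also apply.
print(rp.can_fetch("Googlebot", "http://example.com/private/page.html"))  # True
print(rp.can_fetch("Googlebot", "http://example.com/images/pic.jpg"))     # False

# Any bot without its own section falls back to the * rules.
print(rp.can_fetch("SomeOtherBot", "http://example.com/private/page.html"))  # False
```

If you want Googlebot to obey the wildcard rules too, you have to repeat them inside the User-Agent: Googlebot section.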
Google Corrupt Titles and Short META Descriptions
• Tuesday, July 18th, 2006
Marcia on WMW commented yesterday that pages with unlinked style declarations are turning up with corrupt TITLEs. Just Guessing claims he has the same problem but without a style declaration (my pages do have style declarations, and their titles are still corrupt), which brings us back to square one.
The thread got me wondering if having […]
Google Notebook - Error in Loading User Data
• Friday, July 14th, 2006
I admit I love Google Notebook. I've got all sorts of clips saved up in this thing as I surf the web. But since 5 pm tonight I've been getting this error: "Error in loading user data," and it refuses to go away. Yesterday, Gmail refused to load for a few minutes, so I thought […]
Google’s Vanessa Fox Announces NOODP Tag
• Thursday, July 13th, 2006
A couple of hours ago, Vanessa Fox announced a new META tag that allows webmasters to opt out of DMOZ title/descriptions appearing in the SERPs:
<META NAME="GOOGLEBOT" CONTENT="NOODP">
or <META NAME="ROBOTS" CONTENT="NOODP"> to cover all search engines.
MSN was of course the first to get the ball rolling on this, back on May 22.
Vanessa also wrote:
The way we […]
SEO 101: Feeding Google Juicy Description Snippets
• Saturday, June 24th, 2006
How can you get Google to pick up the description snippet you want? And why should you care? If you use META descriptions, good. If not, read on. My views on this aren't authoritative, but if you disagree, post a good counterexample.
Don’t use H tags just to make text look bigger. Enuf said.
Avoid wrapping text […]
Corrupt Titles in Google SERP
• Tuesday, June 20th, 2006
I've been keeping track of a WMW thread about Google displaying corrupt titles in their new SERP. Basically, Google is tacking on-page text snippets onto the end of the TITLE tag (or element, whatever). Since similar TITLE/description snippets throw a duplicate content flag, this little bug may end up causing major problems for some […]
Google Sitemap Reporting 404s under Summary
• Saturday, May 27th, 2006
Note: This post is a follow up to my earlier post, Googlebot Refreshing Supplementals.
I noticed this morning that Google Sitemap is reporting 404s on the Summary page. The pages Google Sitemap reports missing include some of the pages I’ve been getting emails for since May 20th.
Not all the urls showed up under HTTP errors, just […]
Googlebot Refreshing Supplementals?
• Thursday, May 25th, 2006
Weird 404 error email I sent myself yesterday (url hidden to prevent linking to an adult domain):
HTTP_REFERER: [blank]
HTTP_HOST: www.domain.com
PHP_SELF: /fgdfgfert4534.html
REQUEST_URI: /NONEXISTENTURL.html
REMOTE_ADDR: 220.127.116.11
TIMESTAMP: 5/24/2006 9:15 PM
Quick explanation: I rigged my dynamic pages, so a request to retrieve “maroon-widget.html” 404s and triggers an email if I don’t have “maroon widget” in my […]
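The rig described above can be sketched roughly like this. The original post doesn't show its code (and the site runs PHP, judging by PHP_SELF), so the keyword list, function names, and print-based alert here are all assumptions, not the author's implementation:

```python
# Rough sketch of the 404-and-alert rig described above.
# KNOWN_WIDGETS is a hypothetical stand-in for the real keyword inventory.
KNOWN_WIDGETS = {"maroon widget", "teal widget"}

def send_alert(path):
    # The original rig e-mails the request details; a print stands in here.
    print(f"404 alert: {path}")

def handle_request(path):
    """Map 'maroon-widget.html' to 'maroon widget'; 404 + alert on a miss."""
    slug = path.lstrip("/").removesuffix(".html")
    keyword = slug.replace("-", " ")
    if keyword in KNOWN_WIDGETS:
        return 200, f"page for {keyword}"
    send_alert(path)
    return 404, "Not Found"

status, _ = handle_request("/maroon-widget.html")    # → 200
status2, _ = handle_request("/NONEXISTENTURL.html")  # → 404, fires an alert
```

The point of the rig is that every 404 on a dynamic URL doubles as a log entry, which is how the author noticed Googlebot requesting pages that no longer exist.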
Google Coop Blog RSS to XML Generator
• Wednesday, May 24th, 2006
UPDATE: Read this follow-up post to learn how to feed your Blog RSS to Google Co-op.
Scanning through Google Co-op Group, I came across a Google Co-op XML generator by 1000apps. Unfortunately, it uses words from the title instead of blog categories to generate the XML. So I’m going to have to rewrite it. This is […]
My Next Obsession - Getting a Site Back in the Index
• Tuesday, May 23rd, 2006
For the last two months, I've been maintaining a holding pattern with one of my adult sites, to no avail. The general consensus was that something was up at Google and I shouldn't do anything drastic. But now I'm going to start working on my site again. Till I see some progress, how to get […]
What I’ve been Up To
• Wednesday, May 10th, 2006
I surfed over to Dan Thies’ blog yesterday and noticed he’s been slacking off on his blog, like I have with this one. Matt’s also been absent for a couple of days, though now he seems to be back blogging in full force. I’ve started a couple of new blogs this week, one blog for […]
Keywords Surrounding Links
• Friday, April 28th, 2006
Jim Westergren wrote an interesting post on his blog recently, concluding:
both Yahoo and Google uses the text surrounding links in their algo
So, I looked at this test page that has an outgoing link. I took a snippet next to that link and ran a Google search, but the target page refused to come up. What […]
Duplicate Content Revisited
• Sunday, April 16th, 2006
Tedster wrote a meaty post in WMW concerning duplicate content:
Google tries to select the dupes and then put all but one of them into the “supplemental index”. If a domain has just a few instances of duplication like this in the Google index, things tend to go on as normal. But when many, many urls […]
Supplemental Test Update April 13th, 2006
• Thursday, April 13th, 2006
I’ve been generally keeping up with WMW posts and other SEO blogs and even kept webmaster radio running since noon today, but nothing is really grabbing my interest. I do have a pile of blog post drafts sitting around unfinished, though some of it is so specific to my domains that I’m not sure who […]
Googlebot Slowing to a Crawl
• Friday, April 7th, 2006
I thought I was the only one stuck in supplemental hell with no Googlebot visits to rectify the problem and pick up new pages, but according to a thread on WMW, it seems I'm not the only one experiencing this. Some guys report Googlebot's last visit at the end of March, which is pretty much the […]
Why I don’t Like Index.html
• Saturday, April 1st, 2006
The obvious answer is supplementals in Google. I've used index.html on about 3 of my domains, and since I use Dreamweaver, sooner or later I make the mistake of linking to a page as /index.html and boom… Google will index it. Even this domain has /index.html for the root url. It's a good thing that's […]
Do Google's Spam Reports Work?
• Monday, March 13th, 2006
A few days ago, Matt posted a request on his blog for people to file spam reports, especially for keyword stuffing and Asian spam sites. After reading that, I had a chat with another webmaster friend who thought spam reports were useless. Her point was that spammers injected hundreds of new spam domains every day, […]
Robots.txt Before Linking Up
• Friday, March 10th, 2006
The first thing you should do after you buy a domain is install an .htaccess file that deals with canonical issues like non-www vs. www and /index.html.
The second thing you should do is install a robots.txt that prevents Google from crawling anything except the domain root.
If a hacker decides to submit your urls with a Google url removal tool, this may […]
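A minimal sketch of the first file, assuming an Apache server with mod_rewrite enabled and example.com standing in for the real domain (the post doesn't show its actual rules):

```apache
# .htaccess — canonicalization sketch (hypothetical; assumes mod_rewrite)
RewriteEngine On

# Redirect the bare domain to the www host with a 301
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]

# Collapse external requests for /index.html onto the root URL.
# The THE_REQUEST condition keeps Apache's own DirectoryIndex
# subrequest for / from matching, which would loop.
RewriteCond %{THE_REQUEST} \ /index\.html
RewriteRule ^index\.html$ http://www.example.com/ [R=301,L]
```

For the second step, a robots.txt of `Allow: /$` followed by `Disallow: /` under `User-agent: *` limits crawling to the root page alone, though note that Allow and the `$` anchor are extensions Google supports, not part of the original robots.txt standard.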
Wrong SERP Snippet for Cache
• Wednesday, March 8th, 2006
I've always assumed the title/description snippets displayed in the SERP reflect what's in Google's cache. But now I'm starting to see at least one page where the title/description doesn't match what's stored in the cache.
Here’s an example:
The cache of one page I’m looking at (I won’t post the url since its adult related) is dated 3/5/2006, […]
• Monday, March 6th, 2006
I just checked my site at 18.104.22.168 and noticed a huge drop in pages indexed. Either I’m still doing something wrong or Google is hiccupping again.
I need to check my pages on this DC and see how many of my pages, including subdomains, are indexed correctly.
Since Google keeps falling back to cache from August […]
Google Sitemap FAQ
• Monday, March 6th, 2006
This is my list of things about Google Sitemaps I’m pretty certain about.
“A Sitemap file can contain no more than 50,000 URLs and be no larger than 10MB when uncompressed.” Read more about Google Sitemaps.
Sitemaps are used to let spiders know about pages on the site that are hard for spiders to reach. […]
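For reference, a minimal Sitemap file from this era looks like the following (the URL and dates are placeholders; the 0.84 namespace is the Google Sitemaps schema in use at the time):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.google.com/schemas/sitemap/0.84">
  <url>
    <loc>http://www.example.com/</loc>
    <lastmod>2006-03-06</lastmod>
    <changefreq>weekly</changefreq>
    <priority>0.8</priority>
  </url>
</urlset>
```

Only `loc` is required per URL; `lastmod`, `changefreq`, and `priority` are optional hints, and the whole file is subject to the 50,000-URL / 10MB limits quoted above.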
Google Sitemap Scripts
• Monday, March 6th, 2006
I'm in the habit of writing my own scripts when I can, and since I hear a lot of positive things about Google Sitemaps and I didn't find a script I liked out there, I decided to write a script to crawl one of my domains. Right now, it's site-specific, since I'm excluding certain paths (e.g. […]