The problem is that GWT apps are single-page by nature; they are applications. Still, having a way to track users' actions would be good. No one has cancelled custom-written tracking, but a mature analytics tool such as Google Analytics gives you much more functionality.
Fast and easy. I haven't tried it yet - need to get rid of some pending work first.
Here's a link to the Google Groups discussion: http://groups.google.com/group/Google-Web-Toolkit/browse_thread/thread/61912b40a2ca7b2a/
In short, you load the Analytics script into your single-page site as an external script, and then manually call its 'log' method to create records in the Analytics database. Pretty easy, huh?
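A minimal sketch of what this could look like (the names here are my assumptions, not from the thread): a plain helper that turns a GWT history token into a pseudo page path, plus the JSNI call that would hand that path to the classic ga.js tracker, shown as a comment.

```java
// Hypothetical sketch: map a GWT history token (e.g. "Home" from "#Home")
// to a pseudo page path that the Analytics tracking call can record.
public class AnalyticsPaths {

    // Turn a history token into a path Analytics can log as a "page view".
    public static String tokenToPagePath(String token) {
        if (token == null || token.isEmpty()) {
            return "/";          // treat the empty token as the start page
        }
        return "/" + token;      // "#Home" -> "/Home"
    }

    // In a real GWT app this value would be handed to the Analytics script
    // via a JSNI method, roughly:
    //   public static native void track(String path) /*-{
    //     $wnd.pageTracker._trackPageview(path);
    //   }-*/;
    // (pageTracker is the object created by the classic ga.js snippet.)

    public static void main(String[] args) {
        System.out.println(tokenToPagePath("Home"));
        System.out.println(tokenToPagePath(""));
    }
}
```

You would call `tokenToPagePath` from a history listener, so every "page" change inside the single-page app produces an Analytics record.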
Monday, September 14, 2009
Nice thread on GWT and SEO in the GWT group
You might have seen Ian Bambury on the web, with his http://examples.roughian.com/#Home GWT example site.
In this thread Ian describes how his application gets indexed. I think this is cool - especially the real-world experience with long-running GWT SEO :)
Thanks Ian.
Monday, March 23, 2009
A Google Web Toolkit (GWT) Search Engine Optimization (SEO) demo is coming soon
I'm going to create a little demo showing how one can make AJAX-loaded content indexable by search bots.
The basic idea is to create a "secondary site" - a static one - showing EXACTLY the same content as the primary one. When a bot gets to the secondary site, it can index it well because of its static nature.
When a user gets to a specific page crawled by the bot, he is simply redirected to the dynamic version of the site, which shows the same content.
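As a rough sketch of the "bot or browser?" decision (the class and the marker list are my own assumptions, not part of the demo): check the User-Agent header, serve the static page to bots, and redirect everyone else to the dynamic GWT version of the same content.

```java
// Hypothetical sketch: decide from the User-Agent header whether the
// visitor is a search bot (serve the static page) or a real browser
// (redirect to the dynamic GWT version of the same content).
public class BotDetector {

    // A few well-known crawler markers; a real list would be longer.
    private static final String[] BOT_MARKERS = {
        "googlebot", "slurp", "msnbot", "crawler", "spider"
    };

    public static boolean isBot(String userAgent) {
        if (userAgent == null) {
            return false;
        }
        String ua = userAgent.toLowerCase();
        for (String marker : BOT_MARKERS) {
            if (ua.contains(marker)) {
                return true;
            }
        }
        return false;
    }

    // Inside a servlet filter this would be used roughly like:
    //   if (!isBot(request.getHeader("User-Agent"))) {
    //       response.sendRedirect("/app.html#" + pageToken); // dynamic version
    //   }
    //   // otherwise fall through and serve the static page to the bot

    public static void main(String[] args) {
        System.out.println(isBot("Mozilla/5.0 (compatible; Googlebot/2.1)"));
        System.out.println(isBot("Mozilla/5.0 (Windows NT 6.0) Firefox/3.0"));
    }
}
```

Note that both branches serve the same content for the same URL - only the presentation differs - which is exactly what keeps this scheme on the right side of the cloaking line.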
Here are some key tools I'm going to use:
- OpenSymphony SiteMesh - a good decorating filter. It allows us to wrap the entire page in some kind of template. For example, we could serve a navigation header/footer to bots and old browsers, and a redirect statement to modern browsers, deciding on the user agent.
- Some Java-based CMS for generating/accessing content. Not decided yet. Even better would be to use a wiki engine.
- Google Web Toolkit, surely.
- Tomcat as a servlet/jsp container.
So, I'm waiting for the demo. I hope I'll have several spare hours to get my hands on this interesting stuff.
By the way, there's one interesting problem I can see already: if I put static content inside an HTML panel, it will cause problems with navigation and links - for a wiki especially. Any displayed link would lead us out of the dynamic application, which is not desired.
An alternative solution is to put the content into an IFRAME element. I don't think I really like it, but...
Wednesday, March 18, 2009
Afraid of being banned by Google for 'cloaking'?
Cloaking, basically, is the practice of presenting different content to normal users and to bots. This malicious technique is used by bad people to increase their Google ranking and get traffic, while presenting users with content they never actually asked for. And there is some good news at the end of this post :)
Example of cloaking (do not do this!)
Here is just a rough example. Imagine there's a page on the web with the URL http://exampledomain.url
When the Google crawler gets to this page, it is presented with one version of the page - clean, structured, full of text, headings, etc. This version of the page has lots of keywords, say 'free software download', and is ranked well by the Google bot.
It is possible to tell who is 'knocking on the door' - a machine or a real user.
So, when a real internet user comes to this page, he is shown the landing page of some PR company instead.
This is considered cloaking, and the site may be removed from Google's indices by Google personnel once they discover such bad behavior.
Problems for the good guys
OK, this anti-cloaking policy makes search indexes cleaner, more readable and more reliable. But what if you have a fully dynamic site - a site fully powered by JavaScript, or Flash?
If its navigation is done with JavaScript - you have problems.
If your application is built as a Single Page Interface, there's no correct way to make it indexable. (This is how the guys from Redmond understand SPI.)
Good news (for good guys only)
"The only hard and fast rule is to show Googlebot the exact same thing as your users."
This is very good news (yes, for me it's news - I know the posting was published back in '07). It means that your site will not get banned if you show the same content for the same URLs, just in a different way.
So, some links:
Ok. Gone for now - will post something on this topic soon...
SEO and GWT
I'm going to research a bit what's going on in the GWT and SEO world these days...
If you realize you don't know either of those terms: GWT stands for Google Web Toolkit - tooling for creating rich web applications - and SEO stands for Search Engine Optimization.
So, getting my hands on it...
In this posting, just two links to the documents I'm going to work through first:
- A GWT Google Group posting about SEO and GWT. Quite an old thread - something may have changed since those times. People well known in the GWT community are writing there, such as Sanjiv Jivan and Ian Bambury.
- A whitepaper on GWT and SEO from BackBase. I've only started reading it, so I can't give my opinion yet.