I came across an interesting article today that mentioned that Google does not check which domain its Analytics code is being loaded from. What problem does this cause? Well, you could see inflated traffic numbers when, in actuality, visitors are viewing content on other people's domains. This could cause your ad campaign numbers to be misleading, potential buyers of websites to be duped and a host of other problems. In addition, in the age of the "scraper" site, scrapers that copy all of your code could be inflating your traffic numbers as well. So, how does one verify where the Analytics code is being pulled from?
First, load up your Analytics statistics page and then choose "Content Optimization" -> "Web Design Parameters" -> "Hostnames". This will give you a graph that looks like the following:
From the image, you'll see that 99% of my traffic does in fact come from the hagrin.com domain. Good so far. However, we see a few IP addresses underneath that also show as having generated traffic, with my Analytics code embedded in pages served from those addresses. Someone's stealing my traffic! Well, maybe not. On further inspection, an IP WHOIS shows that those IP addresses are owned by Google, meaning this traffic was generated through their cache system. Therefore, we can conclude that when Google indexes our site, the Google crawler copies our Analytics code. Ahh Google, the ultimate scraper site.
So why is this information important? The obvious answer is to protect your original content and the integrity of your Analytics data. Once you identify a scraper domain, you should contact the domain's webmaster and then the host, claiming DMCA violations - that usually gets everybody's attention right away.
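If you export that hostname report, even a tiny script can flag hosts you don't own. A minimal sketch in Python - the report data and hostnames below are hypothetical stand-ins for your own export:

```python
# Flag hostnames in an Analytics hostname report that aren't domains you own.
# ALLOWED_HOSTS and the report data are hypothetical examples.

ALLOWED_HOSTS = {"hagrin.com", "www.hagrin.com"}

def flag_scrapers(report):
    """Return the (hostname, visits) pairs that don't match an allowed host."""
    return [(host, visits) for host, visits in report
            if host not in ALLOWED_HOSTS]

report = [
    ("hagrin.com", 9900),
    ("www.hagrin.com", 50),
    ("66.249.93.104", 12),       # a Google cache IP, as seen in the graph
    ("scraper-example.com", 38), # a hypothetical scraper domain
]

for host, visits in flag_scrapers(report):
    print(f"Unexpected host: {host} ({visits} visits)")
```

Anything this flags that a WHOIS doesn't trace back to Google's cache is a candidate for that DMCA letter.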
On Sunday, I finished the first ever NYC Half Marathon (sponsored by Nike) in what I would describe as the Tale of Two Races. As if waking up at 3:30am on a Sunday wasn't enough to make me question my sanity, I walked through the pouring rain to the 4:19am train and arrived in the City at around 5 so I could participate in the first ever NYC Half Marathon event.
After grabbing some fruit, I took a cab up to East 86th and Central Park, where the race start line was located. I was one of the first 500 people or so to arrive and now had to waste 2 hours before the start of the race. Lots of stretching, people watching and bathroom trips filled the first hour before the timed pace corrals started filling up. To make sure I didn't have to fight my way through 10,000 people, I got as close as I could to the front of the "open" section. And then the waiting began ...
The highlight of this waiting period was watching the Central Park Conservancy workers using digital cameras to take pictures of people going to the bathroom in the woods by the starting line ... which included women just dropping down and not even looking for real cover (which reminds me of the whole Seinfeld "good naked" vs. "bad naked" debate - this was bad naked).
The race began and, as expected, I had to fight through a lot of slow runners (which was weird because I started right behind the "preferred" starting groups). Immediately, I was introduced to the rolling hills of Central Park, which I definitely should have altered my training to accommodate. Even with the hills, I was able to pace myself at a decent clip, finishing the first 5K and 10K at better than expected times. At around the 7.5 mile mark, we left the hilly park for a flat run through the City. I was still OK until about Mile 9 when, in the middle of 42nd Street, the heavens opened up and race #2 started. What had been a hard but still fast and fun run turned into a sloppy, slightly annoying race. My shoes were soaked, my laces came untied a few times and my pace slowed. The run down the West Side was gorgeous, but the rain put a slight damper on the view.
I finished 1,214 out of 10,241 runners which is respectable considering this was my first timed run at this distance and I didn't train for any of the hills. You can see my full race results here -
Many thanks go out to my brother Chris and my cousin John, who both made the long trek into the City to watch me run and then met up with me afterwards.
Next up - the Knickerbocker 60K (37+ miles), November 25th.
Happy Birthday Google Talk says the Official Google Blog. After 1 year of existence, Google Talk has really come a long way from basically being a novelty IM client where I could play the "Wumpus Game" (notice that Wumpus is never logged on anymore?) to a "functional" IM system. What really sets Google Talk apart currently is its integration with other Google services and products. Let's see what the GTalk team has accomplished.
Google Talk was integrated directly into your Gmail contacts list, which really made IMing an enjoyable experience if no IM client was installed (or could be installed) on the machine you were using. The Gmail integration far surpasses other web-based attempts at IM such as AIM Express, which was a nightmare to use and caused random browser crashes. In addition, chats were saved into a Gmail Chat folder, and the superior search capabilities of Google were brought to IM chat logs (something I have used quite a few times). We saw the introduction of a voicemail system, user profile pictures, themes and a port of Google Talk to Blackberry devices (something I am anxious to get installed and working). Finally, we have seen the introduction of file transfers, made simple and easy to use - a make-or-break feature for many IM users. Hopefully, with the most recent additions, we'll see more widespread adoption of Google Talk.
But where does GTalk need to go tomorrow? Users need the ability to host multi-user chat sessions. Personally, I would like to see an encryption system come standard with every Google Talk install (something that I miss using the Google Talk client instead of my GAIM + Encryption plug-in). Another feature that I could personally benefit from would be a corporate release of Google Talk where chats could be stored locally within a network, VoIP sessions would travel over VPN tunnels and would be searchable using a Google hardware appliance. I'm sure others have a much longer list than I do; however, I love Google Talk's sleekness and lack of "bloat" (read - one of the few Google applications that hasn't been inundated with ads) so hopefully they will be very wary of adding new, unnecessary features.
I know Google Talk may never surpass the AIM user base and that's fine with me - as long as I can convert the people I talk to most over instant messaging. Google Talk remains one of my favorite Google applications and I hope to see additional improvements while keeping it basic, simple and bloat-free.
A few weeks ago Google Checkout ran a special promotion where you would get a free Google Checkout T-shirt if you purchased something ($20 or more) from a participating merchant with the Google Checkout system. Today the T-shirt finally arrived via UPS. All the T-shirts come in size Large. Click on the image for a larger view.
Google has released a new Google Base API for developers to make even more applications for the Google Base index. What's interesting about this API post is that Google provided a small demo for developers to start brainstorming possible API implementations. As always, I'll be taking a look at the new API to see if there are potential applications I could build to help Google users.
So, after quite a few months of dealing with "Your copy of Windows may not be Genuine", I finally decided to apply my newly obtained, legal Windows XP product key to my home system. After a quick Google search on how to apply the key, I realized I was in for a major problem - my system had already been patched to Service Pack 2.
Almost all of the tools you will find in a quick Google search either will not work on SP2-patched systems or are geared more toward providing you with a product key. Since I had a legal product key through MSDN, I wanted the cleanest solution possible. It turns out Microsoft has created its own product key patching tool that works flawlessly and effortlessly. By following that link, you'll be prompted to download an executable called KeyUpdateTool.exe which, with just a few clicks and a valid XP key, will allow you to change your product key. One note - you will need to reboot for the change to take effect.
Hope this helps someone looking for a legal solution to changing their key.
With only a week until the first ever New York City Half Marathon (in which I will be competing), I have been rapidly getting more and more obsessed with running competitively. In fact, I even bought a non-computer book today called Ultramarathon Man: Confessions of an All-Night Runner to psych myself up with the idea that the human body really can endure more pain than our brains initially think. As I was shopping for this book, I was reminded (by an advertisement) of the partnership between Apple and Nike to create a tracker that inserts into a runner's shoe and transmits information in real time to your iPod Nano. For those not familiar with the concept: you purchase Nike+ shoes specifically designed with a small compartment under the sole for a small transmitter. Next, you attach a receiver to your iPod Nano, which captures the results as you run. Finally, when you get back to your computer, you upload your running results to www.nikeplus.com, where you can track your runs and compare against the rest of the Nike+ community. Although I'm a willing adopter of this technology, the solution suffers from several drawbacks, and those drawbacks may ultimately prevent me from using the Nike+ system.
1) The iPod Nano Limitation - Although the iPod Nano makes the most sense to design this system for due to its size and weight, many people who workout at my gym own iPods ... but not Nanos (I personally own a 5th generation 60GB Video iPod). Therefore, for me to adopt this system, I would need to purchase another iPod which really doesn't make any sense and is probably the ultimate reason why I won't buy the Nike+ system. If they provided alternative options for other iPod types, I would have bought the Nike+ equipment today in time for the marathon.
2) Does the Cost Justify the Benefits? - When you really think about it, this system doesn't offer much more than the functionality you would get from a stopwatch, some simple math and a spreadsheet. Now, if you already have an iPod Nano and you were going to buy new running shoes anyway (principle of sunk cost), then the $30 price tag for the "Sport Kit" could be easily justified. However, if you need to buy the shoes, the iPod and the Sport Kit, you're looking at a minimum of a $250 investment (add another $100 if you get the larger capacity iPod Nano). Can you really justify the expense when you can track the same information far less expensively and you have to wear shoes you may not normally use when running? Is the "e-peen" factor enough to justify the expense as well?
3) Does Nike "Just Do It" for You? - Personally, I wear Asics as my primary running shoe, so the "forced" switch to the Nike brand may not be welcomed by a lot of runners, myself included. Without partnerships with other running shoe producers, many runners may shy away from this product because they would have to wear shoes they are not comfortable with.
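To put point 2 in perspective, the "simple math" really is simple. Here is a sketch of the per-mile pace calculation the Sport Kit automates - the distance and time below are made-up examples, not real results:

```python
def pace_per_mile(total_seconds, miles):
    """Average pace as a minutes:seconds string per mile."""
    secs = total_seconds / miles
    return f"{int(secs // 60)}:{int(secs % 60):02d}"

# A hypothetical 13.1-mile run finished in 1:45:00 (6,300 seconds).
print(pace_per_mile(6300, 13.1))  # 8:00 per mile
```

A stopwatch, a known course distance and two lines of arithmetic get you the headline number; what the kit adds is convenience and the community site, not new information.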
So now that we have pointed out the problems with the Nike+ system, what can Apple and Nike do to resolve these problems and hope for a more widespread adoption?
1) Don't Discriminate - Apple should provide adapters for their full line of iPods, and I can't really see a huge hurdle to doing so. The iPod software cannot differ too greatly across versions, and the dock connector at the bottom is standard across all iPod models. If they add support for the rest of their iPod line, I would expect to see more adopters of this product.
2) More Friends, More Allies - Although I do see a very serious (and probably impossible) hurdle here, if Apple could partner up with other running shoe companies, the greater shoe options to runners would also allow for more runners to seriously consider purchasing this product. However, with the whole product being called "Nike+", I'm not sure if there is an exclusivity deal in place.
Unfortunately, even though I would love to purchase something like this, I just can't justify the cost considering I would have to pick up a second iPod to utilize the Nike+ product. Hopefully, Apple will come up with a receiver that will work on their other iPod models in the near future.
Call me "obsessed much" and I would agree with you. My interest in Digg isn't to bash it or degrade it, but to make Digg the most valuable resource for technical information on the Internet, thanks to its sophisticated ranking system, active user base and ability to pull in information from all over the Internet. However, especially since the release of Digg v3, I have become more and more irritated with the way Digg seems to be shaping itself, and I would like to point out the issues along with some possible solutions/improvements. I would post this somewhere other than my blog, but Digg doesn't really give its users a forum to discuss topics like these in-depth (we all hate blog spam).
The "What's Wrong" -
1) Digg's comment meta moderation doesn't work -
- All users have the same weighted voice - if a comment posted early in a thread "sounds" smart, uninformed users will assume it is factually correct and the comment will be "dugg" up. A correction may come later, but it often won't be dugg as high because it wasn't posted as promptly. In several cases, the incorrect post is difficult to counter due to the number of positive diggs resulting from the placement of the comment, the fact that people mindlessly digg comments that already have a lot of diggs (i.e. "group think"), and the fact that people who previously thought the comment was accurate cannot change their vote.
- First comments get moderated more - the moderation system is uneven across the order of posts to a story. Due to the huge volume of comments on even first-page stories, people rarely have time to read all the comments associated with a story, and only the comments near the top of the page are moderated with any frequency. I call this phenomenon the "commenting flurry" that occurs when a post hits the front page.
- Wasted diggs - what is the difference between +5 diggs and +125 diggs? Since most people browse to avoid negative posts (most browse at +0 or higher), positive diggs past a certain point are superfluous: they change neither the commenter's weight nor the reader's experience.
- People get "dugg" up for asking "dumb" questions - you should not be dugg up for contributing a simplistic question to the conversation. In some cases, the question is dugg higher than the informative answer. For instance, someone may ask what a technical term means and receives more positive diggs than the actual, very informative answer which is inherently wrong. This phenomenon, plus other comment rating issues, makes browsing comments at a level higher than +0 diggs impossible without losing valuable content that exists within story-based comments.
2) Digging to Read Later - The other day I saw a story hit the front page RSS feed with only 100 diggs and no comments, while the site was experiencing the "Digg effect" (when a site effectively gets DDoSed into submission and the story no longer loads). Because interesting topics still attract diggs in that state, users have started using Digg as a "social bookmarking" application. Many argue that Digg is in fact a self-proclaimed bookmarking system; I would argue the bookmarking mindset directly interferes with how the rest of Digg works. Although bookmarking is a very useful and smart action from the user's perspective, it throws off the integrity of the ranking/weighting system that is at the heart of how Digg works. A great example of this can be found here. If a story is dugg to the point of throwing 404 errors and hundreds of people will bookmark an article based on title alone, what will prevent baseless, incorrect, sensational stories from hitting the front page as long as they have well-written, interest-catching headlines?
The Possible Solutions -
Generally, it's good practice that when you point out a problem you also offer a solution. So, here are the best ideas I came up with over the last month or so while riding the train to and from work every morning -
1) Users need the ability to create their own authoritative universe - Yikes, that was a mouthful. This idea is an extension of the "Friends and Foes" idea from *gasp* Slashdot *gasp*. However, where Slashdot gets it wrong due to their awful interface and failure to take the idea to the next step, I propose a more dynamic way of altering the way comments are displayed to the user. The idea generally revolves around the following logic:
- User A diggs up or down a comment and/or story of User B's (Edit: As one commenter (ok, the only one) has pointed out, this solution isn't really meant for stories as much as it is the commenting system.)
- Based on those diggs, future comments or stories posted by User B would appear promoted or demoted (i.e. weighted) to User A creating a custom "Digg experience" that evolves as the user contributes to Digg
- This would eliminate the need for users to "ban" problem users and preserve threaded replies to previously banned users (currently, if you ban a user, you will also "lose" the comments made in threaded reply to that banned user)
What problems does this solve? Well, digging up or down a post that is already firmly entrenched (i.e. has +/- 20 diggs) remains a worthwhile way to register your opinion, which solves the "wasted diggs" problem. In addition, this could potentially solve the incorrectly-dugg-up and dumb-question problems: as you continually mod other users up or down, your "own universe" on Digg evolves to give you the most useful output possible. Obviously, this would be a feature you could turn on and off, since you may sometimes want to see the posts everyone else rates highly, but it gives users like myself the ability to shape Digg and create de facto "authoritative sources".
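The per-user weighting above could be modeled very simply. A sketch of the idea - the class name, user names and scoring weights here are all hypothetical, not anything Digg actually implements:

```python
from collections import defaultdict

class DiggUniverse:
    """One reader's personal view of Digg: past diggs on an author
    bias how that author's future comments are scored for display."""

    def __init__(self):
        # author -> net diggs this reader has given that author's comments
        self.affinity = defaultdict(int)

    def digg(self, author, up=True):
        self.affinity[author] += 1 if up else -1

    def display_score(self, author, raw_diggs):
        # Promote or demote the comment by the reader's history with its author.
        return raw_diggs + self.affinity[author]

me = DiggUniverse()
me.digg("user_b", up=True)
me.digg("user_b", up=True)
me.digg("problem_user", up=False)

print(me.display_score("user_b", 5))        # 7: promoted by past upvotes
print(me.display_score("problem_user", 5))  # 4: demoted, no outright ban needed
```

Note how the demoted author's comments still appear (just lower), so threaded replies to them are preserved - unlike an outright ban.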
2) Make Further Use of the Friends Feature - As a corollary to the above, if you make someone a friend, a user will not need to go through the process of digging up User B's comments and would just need to make them a friend to weight them positively.
3) Require Users to Click on the Article Link - Before being allowed to digg an article, force users to click on the article link. This should at least prevent the "mindless" digg, where users digg stories they haven't even tried to read (the page could have 404'd, but that's near impossible to track). The drawback is that this could "destroy" the bookmarking aspect of Digg, though I would argue bookmarking is worth separating from digging anyway.
4) Provide Users a Bookmarking Option Separate from Digging a Story - Since many users love Digg for the bookmarking functionality it provides (and solutions should not remove useful features, but improve them), Digg should consider a standalone bookmarking feature that exists outside of digging an article. This would keep the integrity of the weighting/digging system intact while still allowing users to "tag" an article if they cannot read a story at work or the article's web server is under the Digg effect.
Hopefully, some of these ideas can pick up momentum and maybe we'll see them implemented in v4. However, I'm worried that one of the best tools I have for keeping up with technology news is steadily becoming cluttered and noisy, and I have found myself looking for a more efficient alternative. Again, I apologize for the blog spam; however, without a better alternative, this seemed like the best way to get these ideas out there.
Well, I have to say that I am highly impressed with my Blackberry's ability to post new content to my new Drupal CMS. The Drupal developers should be commended for creating a highly functional, yet easy to use mobile interface.
The most impressive functionality has to be the taxonomy dropdowns. I was skeptical that they would work on my Blackberry, but after selecting a dropdown you are prompted with a "Change Option" choice. After selecting that option, you are technically navigating only the dropdown options. To allow users to select multiple dropdown values, checkboxes appear to the left of each value. This functionality really demonstrates how SELECT tags are rendered separately from the rest of the page's HTML (you may have noticed this when dropdowns rendered on top of DHTML menus).
So, my first mobile blog post seems to be a success (although I just noticed that posting markup might prove difficult if not impossible). Once I get the redesign completely done, expect more mobile posts in order to utilize my commute time.
Due to time restrictions and wanting to be able to focus on actually creating content, I have decided to ditch the custom made CMS (to my sadness) and made the switch to Drupal. You'll notice over the upcoming weeks that I will be tweaking the design and the layout and then hopefully getting some additional writers to add content here.
Up next I have:
- Create categories/tags and apply them to all the posts
- Fix the dates on all the posts
- Figure out a way to redirect all the old links to the new content
- Add my "articles" to the site as well
- Fix links on the side and customize the navigation
- Apply a nicer looking theme or create my own
Obviously, this is a pretty big list, so hopefully I will be able to get it all done within a month or so. Any suggestions, let me know.
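For the redirect item on that list, the core of the job is just a lookup table from old URLs to new Drupal paths, served as 301s. A minimal sketch - every path below is a hypothetical example, since the real mapping depends on how the old CMS numbered its posts:

```python
# Map old CMS URLs to their new Drupal node paths and answer with a 301.
# All of these paths are hypothetical placeholders.

REDIRECTS = {
    "/blog.php?id=12": "/node/12",
    "/articles.php?id=3": "/node/45",
}

def redirect_for(old_path):
    """Return (status, new_path): 301 for a known old URL, else (404, None)."""
    new_path = REDIRECTS.get(old_path)
    return (301, new_path) if new_path else (404, None)

print(redirect_for("/blog.php?id=12"))  # (301, '/node/12')
```

Whether this lives in a script, an Apache rewrite map or a Drupal module, using permanent (301) redirects is what lets search engines transfer the old pages' standing to the new URLs.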