Dear Google, there is no such location as “Washington, D.C., DC”
|| 10/25/2010 || 5:42 pm || + Render A Comment || ||
Above is a series of screen grabs showing Google’s webcrawler traversing my blog. The content harvested by this robot makes its way onto Google’s servers, and ultimately into your search results.
Did you notice? I switched over to FeedBurner. Eh.
|| 9/16/2009 || 9:03 pm || 1 Comment Rendered || ||
FeedBurner is a blog feed management provider, originally launched in 2004 and purchased by Google in 2007, that offers custom RSS feeds and management tools to bloggers, podcasters, and other web-based content publishers. Unlike the old system of scattered RSS feeds emanating from this website, FeedBurner lets me see more accurate information about who is reading my blog entries, and possibly make a couple bucks by having Google AdSense ads served alongside my content.
I had thought about switching over to FeedBurner last year, but at the time I hadn’t signed up for AdSense and thought it was just a waste of time fooling with my RSS feeds. However, I was really curious to see exactly how many readers I had obtained over the years, and this was the only option I was aware of that provided this information. I know, for example, that my website receives hundreds and sometimes thousands of visitors each day, and most of them simply traverse the archives and go on their merry way. But what about those who view my newest content through RSS and never visit my website? That is where FeedBurner comes in….
Last week I spent about 5 hours one evening trying to figure out a way to sync ALL of the feeds on this blog (each category used to have its own feed) with FeedBurner. After reading various blog entries about how other webmasters manually edited their .htaccess files to redirect all of their RSS feeds to FeedBurner, I was still unable to get my mod_rewrite rules to sync ALL the feeds properly, so I gave up. But, alas, I didn’t fully give up; instead I just went with the basic WordPress plugin that FeedBurner offers, and it appears to have done the trick. I should have just gone the plugin route from the beginning, but eh, I wanted to test my own coding capabilities.
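For the curious, here is a minimal sketch of the kind of .htaccess rules those other webmasters described. The feed name is hypothetical, and this is the generic single-feed pattern, not the multi-category version I failed to get working:

```apache
# Send WordPress feed requests to FeedBurner, but let FeedBurner's
# own fetcher (and the feed validator) through untouched, otherwise
# the redirect would loop forever.
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} !FeedBurner    [NC]
RewriteCond %{HTTP_USER_AGENT} !FeedValidator [NC]
RewriteRule ^feed/?.*$ http://feeds.feedburner.com/example-feed [R=302,L]
```

The trouble with my setup was that every category feed would have needed its own rule mapping to its own FeedBurner feed, which is exactly the bookkeeping the plugin handles for you.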
By testing the different available feeds in Google Reader, it appears that no matter which format readers were already subscribed to (RSS 1.0, RSS 2.0, Atom, etc.), the feeds have properly synced with FeedBurner. This just means that if some day in the future I decide to stop using FeedBurner, there will be little change for current subscribers. Only those who subscribed through FeedBurner itself will need to change their subscription method; all others will not see much of a change (probably just no ads).
Nonetheless, if you haven’t yet, please adjust your RSS reader to be subscribed to:
Or you can just keep your subscription the same…..
Google Reader’s Featured Reading Lists: Where are the rest of the newspaper journalists?
|| 8/27/2009 || 7:51 pm || + Render A Comment || ||
After logging into Google Reader this afternoon, I was presented with a link that brought me to the page above. It features lists of blogs that journalists, foodies, and tech bloggers read. I decided to go through the entire listing and was struck by the fact that so many of the journalists are from the New York Times….
I think the overall listing is decent, but what about journalists from other newspapers? Most of the journalists & bloggers listed above do not have a daily printed edition of their reporting, and of those that do, only the New York Times is represented. So what about the reporters from the Washington Post, the Los Angeles Times, the Boston Globe, etc., who have their writings published each day? I bet they read blogs too. The New York Times might be one of the best & largest daily newspapers in the country, but Google should have reached out to a wider range of journalists from other cities around America.
The Infinite E-mail – An Artistic Potential Security Flaw in Apple’s Mail Application [Inbox Art]
|| 8/12/2009 || 1:53 pm || + Render A Comment || ||
A few weeks ago I found out that Mac users can send fully coded HTML e-mails using Apple’s Mail Application. All one needs to do is open up Safari, go to the page you want to e-mail, and select “Mail Contents of This Page” in the File Menu (see image above). The contents of the page are then automatically pasted into an e-mail that is ready to be sent:
But what if the HTML references PHP scripts that dynamically load content? The HTML (originally from the Grand Juxtaposition via the front page of my website) embeds images served by two PHP scripts that randomly select two images from two different folders on my website. So when you click on the e-mail in your inbox, Apple’s Mail Application re-requests those image URLs, my server runs the PHP scripts, and two new images are displayed:
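A hypothetical sketch of what such a script might look like (the folder name and file types are assumptions; my actual scripts work along these lines):

```php
<?php
// rotate.php — picks a random image from a folder and serves it.
// An <img src="http://example.com/rotate.php"> in the e-mail's HTML
// triggers this script every time the message loads remote images.
$folder = 'images/rotation/';
$images = glob($folder . '*.{jpg,png,gif}', GLOB_BRACE);
$pick   = $images[array_rand($images)];

// Send the right headers so the response renders like a static image.
header('Content-Type: ' . mime_content_type($pick));
header('Content-Length: ' . filesize($pick));
readfile($pick);
```

Since the mail client fetches the URL fresh on each viewing, the picture changes every time the message is opened, even though the e-mail itself never changes.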
Notice that the images in the e-mail are different than what was originally sent
Let’s say the script was malicious and called a website that attempted to download malware. Would this ‘discovery’ be a flaw in Apple’s Mail Application?
So far I have tested this splendid e-mail by sending the same page to my GMail, Yahoo Mail, and MSN e-mail accounts. None of them behaved like Apple’s Mail Application: MSN loaded the foreground graphic but not the background graphic, and neither GMail nor Yahoo reloaded the images at all. I have not tested it in Entourage or any other offline e-mail client programs, and I am curious whether they’ll run the scripts or not. Regardless, this is probably one of the coolest e-mails ever!
Back in December of last year I discovered that a malicious robot had added a page to my website, and I had some fun exploiting the fact that hundreds of people were clicking on the fraudulent search engine results. Sure enough, last night it happened again, but unlike last time, I found out WHY it happened.
Unknown to me, on three different websites of mine, there were folders that had incorrect file permissions. Generally speaking, each file and folder on a website has its own set of permissions which allow different users different levels of access. Nearly all of my files and folders have their permissions set to 755, which means that I, and only I, can change their contents. However, today I discovered that three folders on three different websites had their permissions set to 777, which means that ANYONE could write files to those folders. The result was that a malicious robot exploited this lack of security and wrote its own files to my websites.
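If you want to check your own site for this problem, here is a minimal sketch in PHP that scans for world-writable (777) directories and tightens them back to 755. The web root path is hypothetical; run it as the account that owns the files:

```php
<?php
// Hypothetical web root; adjust to your own hosting account.
$root = '/home/example/public_html';

// Walk the directory tree, visiting folders as well as files.
$items = new RecursiveIteratorIterator(
    new RecursiveDirectoryIterator($root, FilesystemIterator::SKIP_DOTS),
    RecursiveIteratorIterator::SELF_FIRST
);

foreach ($items as $item) {
    $path = $item->getPathname();
    // fileperms() includes file-type bits; mask down to the 0777 bits.
    if ($item->isDir() && (fileperms($path) & 0777) === 0777) {
        echo "world-writable: $path\n";
        chmod($path, 0755);   // owner-only write again
    }
}
```

Equivalently, a hosting provider’s tech support (as mine did) can pull the same listing straight from the shell, but a script like this lets you re-check on your own schedule.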
I found out about this from a random person who informed me that a page on my website was sending visitors to a page that forced them to download a fake virus scanner, which I can only assume was rogue malware. I contacted my hosting provider thinking that my website passwords had been compromised, and tech support responded with a listing of all the folders on all my websites that had 777 file permissions.
From there, I went to each of these folders and looked for the newly added malicious files. Instead of merely deleting the files, I opted to do what I did last time and replace the malicious code with a basic HTML file of my own. The result so far has been over 2,000 people clicking on the fake search results and being brought to a landing page like the one above telling them they should try searching again.
I must say that their hack is pretty simple, but also rather sophisticated. I would not have realized that I was being used to help spread malware if that person had not notified me. The hack works by taking a HUGE list of basic words and dynamically creating hundreds of new pages that feature those keywords. Google’s own robots then visit the pages and enter the hundreds of fake entries into their database. The beauty of this process is that the evil geniuses behind the code use one PHP file to dynamically generate hundreds of fake pages that all draw people to their webpage, and now those people are coming to my website instead.
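To make the one-file trick concrete, here is a defanged sketch of the pattern (the file name and parameter are hypothetical, and the actual dropped file also redirected visitors onward to the malware landing page, which this sketch omits):

```php
<?php
// spam.php?kw=some+phrase — a single file that acts as hundreds of "pages".
// Each distinct ?kw= value produces a unique-looking page for the crawler.
$keyword = isset($_GET['kw']) ? $_GET['kw'] : 'example phrase';
$keyword = htmlspecialchars($keyword);
?>
<html>
<head><title><?php echo $keyword; ?></title></head>
<body>
  <h1><?php echo $keyword; ?></h1>
  <p>Filler text stuffed with "<?php echo $keyword; ?>" so that a search
     crawler indexes this URL as a standalone page about that phrase.</p>
</body>
</html>
```

Feed the crawler a link for every word in the HUGE list (via a generated sitemap or link farm) and one script yields hundreds of indexed “pages” at essentially no cost to the attacker.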
Throughout this week I am going to continue to monitor this discovery and analyze the code that was used to generate these pages.
Here is an example of a bad search result from Google:
My page just so happened to be the only page on the Internet with those exact words.
About a week ago I noticed that Google had quietly removed my favorite component of their on-line video services: video ranking. It was an automated service that let users see which videos were viewed the most, blogged about the most, and shared the most each day, week, and month. This ranking system offered a unique snapshot of the internet video zeitgeist and oftentimes helped me find videos that I otherwise would not have found.
There was also the ability to seek out popular videos by geography through the country search. This allowed me to find videos that were popular in England or Canada and compare them to the popular videos in America. All there is now is Hot Videos, which does not provide the same depth of understanding that the other metrics offered.
So why remove a popular feature with no note to the public? Well, there was a tangentially related note posted back in January on Google’s Blog about discontinuing support for uploads to Google Video. However, neither their FAQ nor the blog entry made any mention that the video rankings would be taken away.
I understand that Google Video, the video hosting service, had to spend a great deal of time & money removing copyrighted material on an ongoing basis. With YouTube already having to deal with this, it makes sense to consolidate the video operations within YouTube. But why remove the rankings that cross over to all of the videos hosted by Google, including YouTube? It just doesn’t make sense.
The only answer that I’ve been able to come up with is the suppression of popular videos. By removing the ability of users to see what videos are popular at a given time, Google can prevent users from sharing the popular videos with others. If they want to prevent the next Zeitgeist film or rant about the smelly New World Odor, they have found the perfect way to do so: don’t let people know what is popular through their on-line services. Instead, make them find it themselves through other means.
But why would Google do this? What would be their motive? I really don’t know, but it reminds me of the Samizdat in David Foster Wallace’s Infinite Jest: it is as if Google wants to prevent its users from watching The Entertainment in order to help maintain social cohesion. But, alas, people will always find a way to obtain what they are looking for. The only difference is that it now appears that Google is not being the best search engine it can be.
In summary, I don’t care if Google stops allowing people to upload videos to their Google Video servers, as people will find other servers, but don’t remove popular methods of finding video content. I want to know what the most viewed video was yesterday in _____[country]_____. I want to know why ____________ was watched by more people yesterday than any other video on the internet. Google once provided an excellent tool for knowledge discovery through its rankings system but has taken it away without a decent reason. So, dear Google, when will you reinstate the video rankings? …And why did you remove them in the first place?
The screen grab above links to what used to be the video ranking page, which now forwards visitors to the basic Google Video front page.
Hey Google & YouTube, those are not my Senators! I have no Senators!
|| 2/9/2009 || 6:26 pm || Comments Off on Hey Google & YouTube, those are not my Senators! I have no Senators! || ||
In continuation of yesterday’s posting: recently Congress changed its rules to allow Senators and Representatives to use YouTube to share information with their constituents. Today I noticed a second tragic flaw in their layout. Since the residents of Washington, DC are denied representation in the Senate, the coders at YouTube are using the District of Columbia’s slot in the state list to show videos from various congressional committees. Instead of incorrectly listing DC as a state, they should include a separate link to “Committees.” Moreover, as you can see above & below, Google Maps removes the words District of Columbia at certain zoom levels. This further shows how little YouTube/Google cares about the half a million disenfranchised residents of the District of Columbia.
Hey Google & YouTube, that is not my Representative or Delegate!
|| 2/8/2009 || 6:21 pm || Comments Off on Hey Google & YouTube, that is not my Representative or Delegate! || ||
Recently Congress changed its rules to allow Senators and Representatives to use YouTube to share information with their constituents. Today I noticed a tragic flaw in their layout. Since the residents of Washington, DC are denied representation in Congress, the coders at YouTube are using the District of Columbia’s slot in the state list to show videos from various congressional committees, not from my elected “Shadow” Representative Mike Panetta or Congresswoman Eleanor Holmes Norton. Moreover, as you can see below, Google Maps removes the words District of Columbia at certain zoom levels. This further shows how little YouTube/Google cares about the people of the District of Columbia: