185.31.17.0/24 subnet of Fastly’s CDN null-routed in Pakistan?

I rely heavily on GitHub and Foursquare every day, the former for work and pleasure, and the latter for keeping track of where I go through the course of a day. Since yesterday, though, I have been noticing that pages on GitHub take close to an eternity to open, if they don’t fail outright. Even when a page does load, all of the static content is missing and many things don’t work. With Foursquare, I haven’t been able to get a list of places to check in to. Yesterday, I wrote these off as glitches on Foursquare’s and GitHub’s networks.
 
It was only today that I realized what was going on. GitHub and Foursquare both rely on Fastly’s CDN services. And, for some reason, Fastly’s CDN has not been working within Pakistan.
 
The first thing I did was look up Fastly’s website, and I found that it didn’t open for me either. Whoa! GitHub’s not working, Foursquare’s not loading, and now I can’t get to Fastly.
 
I ran a traceroute to Fastly, and to my utter surprise, the trace ended up with a !X (comms administratively prohibited) response from one of the level3.net routers.
 
$ traceroute fastly.com
traceroute: Warning: fastly.com has multiple addresses; using 216.146.46.10
traceroute to fastly.com (216.146.46.10), 64 hops max, 52 byte packets
[...]
 6 xe-8-1-3.edge4.frankfurt1.level3.net (212.162.25.89) 157.577 ms 158.102 ms 166.088 ms
 7 vlan80.csw3.frankfurt1.level3.net (4.69.154.190) 236.032 ms
 vlan60.csw1.frankfurt1.level3.net (4.69.154.62) 236.247 ms 236.731 ms
 8 ae-72-72.ebr2.frankfurt1.level3.net (4.69.140.21) 236.029 ms 236.606 ms
 ae-62-62.ebr2.frankfurt1.level3.net (4.69.140.17) 236.804 ms
 9 ae-22-22.ebr2.london1.level3.net (4.69.148.189) 236.159 ms
 ae-24-24.ebr2.london1.level3.net (4.69.148.197) 236.017 ms
 ae-23-23.ebr2.london1.level3.net (4.69.148.193) 236.115 ms
10 ae-42-42.ebr1.newyork1.level3.net (4.69.137.70) 235.838 ms
 ae-41-41.ebr1.newyork1.level3.net (4.69.137.66) 236.237 ms
 ae-43-43.ebr1.newyork1.level3.net (4.69.137.74) 235.998 ms
11 ae-91-91.csw4.newyork1.level3.net (4.69.134.78) 235.980 ms
 ae-81-81.csw3.newyork1.level3.net (4.69.134.74) 236.211 ms 235.548 ms
12 ae-23-70.car3.newyork1.level3.net (4.69.155.69) 236.151 ms 235.730 ms
 ae-43-90.car3.newyork1.level3.net (4.69.155.197) 235.768 ms
13 dynamic-net.car3.newyork1.level3.net (4.53.90.150) 236.116 ms 236.453 ms 236.565 ms
14 dynamic-net.car3.newyork1.level3.net (4.53.90.150) 237.399 ms !X 236.225 ms !X 235.870 ms !X

Now, that, I thought, was most odd. Why was level3 prohibiting the trace?

I went looking for a support contact at Fastly to find anything that could explain what was going on. I found their IRC channel on freenode (I spend a lot of time on freenode) and didn’t waste time dropping into it. The kind folks there told me that they’d had reports of one of their IP ranges being null-routed in Pakistan: the 185.31.17.0/24 range. I did some prodding about on the network and confirmed that that was indeed the subnet I couldn’t reach from within Pakistan.

$ ping -c 1 185.31.18.133
PING 185.31.18.133 (185.31.18.133): 56 data bytes
64 bytes from 185.31.18.133: icmp_seq=0 ttl=55 time=145.194 ms
--- 185.31.18.133 ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 145.194/145.194/145.194/0.000 ms

$ ping -c 1 185.31.16.133
PING 185.31.16.133 (185.31.16.133): 56 data bytes
64 bytes from 185.31.16.133: icmp_seq=0 ttl=51 time=188.872 ms
--- 185.31.16.133 ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 188.872/188.872/188.872/0.000 ms

$ ping -c 1 185.31.17.133
PING 185.31.17.133 (185.31.17.133): 56 data bytes
--- 185.31.17.133 ping statistics ---
1 packets transmitted, 0 packets received, 100.0% packet loss
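For anyone who wants to repeat the check from their own connection, a quick shell loop along these lines does the same probing in one go. The .133 host in each block is just an arbitrary probe address, and -t 2 is the two-second timeout flag of the BSD ping that ships with OS X (Linux uses -W instead):

$ for net in 185.31.16 185.31.17 185.31.18; do
    if ping -c 1 -t 2 "$net.133" > /dev/null 2>&1; then
        echo "$net.0/24: reachable"
    else
        echo "$net.0/24: unreachable"
    fi
  done

From here, only the 185.31.17.0/24 block comes back unreachable.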

They also told me they’d had reports from users on both PTCL and TWA of the range in question being null-routed. They said they didn’t know why it had been null-routed, but would appreciate any information locals could provide.

This is ludicrous. After Wi-Tribe’s filtering of UDP DNS packets to Google’s DNS and OpenDNS servers (which they still do), this is the second absolutely preposterous thing that has pissed me off.
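For anyone on Wi-Tribe who wants to check the DNS filtering for themselves, a pair of queries along these lines is enough (8.8.8.8 and 208.67.222.222 being Google’s and OpenDNS’s public resolvers):

$ dig @8.8.8.8 github.com
$ dig @208.67.222.222 github.com

If those time out while the same query against the ISP’s own resolver comes back fine, UDP DNS to outside resolvers is being dropped somewhere along the way.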


Batman: Arkham City on Xbox 360

I’m what you could safely call a hardcore gamer. My history with gaming goes a long way back, starting with a measly Atari computer Dad bought me from abroad, moving on to PC gaming, and eventually shifting to the world of consoles, leaving PC gaming behind for good. I have owned a little fewer than half a dozen gaming consoles and played on others at friends’ places. The console I have played the most games on would have to be Sony’s PlayStation, the very first one that came out. The sheer number of games I played and clocked on it is staggering; I couldn’t recall how many even if I forced myself to.

When my younger brother sold our PlayStation 2 to buy an acoustic guitar and further his interest in music, my gaming world came to an abrupt halt. For a while after that, I didn’t really play games. It was well after I had started earning that I finally decided to revive my former, dormant self and went out to buy the Xbox 360. To date, it is the console I have been happily playing games on. And I love it.

A week or so ago, I updated the firmware on the Xbox to the latest available and got my hands on half a dozen games. Some that I’d care to name: Batman: Arkham City, Gears of War 3, Skyrim, and FIFA 12. Of particular interest to me was Batman: Arkham City, a game I had been eagerly looking forward to playing.

Half an hour in, I had fallen in love with the game. Everything about it. The gameplay in particular reminded me of Assassin’s Creed 2, another game I thoroughly enjoyed. I was also very happy to learn that it featured a very long storyline. I saw myself having fun with this game for a while.

Until something hideous happened. It went like this. I had the game installed on the hard disk, so as to spare the lens on the Xbox. When I first ran the game, I decided to store my saved progress on the hard disk as well. It all went fine, until I had to take my Xbox over to a friend’s house for a game night. There, we had to remove Batman from the hard disk to make room for another game. I suspected deleting the game might also remove the saved progress, but on the assurances of friends, I bit the bullet. The next day, when I sat down to play Batman at home, I was taken aback to find that my saved progress was gone. Zilch. Poof. There was nothing, as though there had never been anything. I felt incredibly sad. I blamed the loss on the deletion of the game from the disk, though logically that blame didn’t make sense.

Dejectedly, I set about playing the game from the start, this time making sure to save progress to the memory card. I played over a couple of days, reached where I had left off, and moved ahead, happy with my progress.

This morning when I ran the game, I felt the exact same horror I had last week. The saved progress was gone again. It was nowhere to be found on the memory card. I searched everywhere, to no avail. I didn’t know what could have caused it this time around. I yanked out the memory card and slid it into the second memory bay. Nothing. It was gone, without a trace. I felt betrayed. I let the game linger on the start screen, contemplating whether I should start over again. I couldn’t make up my mind.

And then, on a whim, I ran a search on Google to see if I could find any information on this erratic behaviour. Voila, I did. Many other Xbox owners had reported similar issues, as well as their resentment over them. This is the search I ran.

For as long as I could bear to sift through piles of forum posts and articles on the web, I couldn’t find any resolution for the problem. If anything, it helped me come to a temporary decision: to hold off on playing the game again until a workable solution turns up. Holding off isn’t easy. It is such an awesome game, and I badly want to play it through to the end. Oh well, off to play Skyrim!

Guide: Deploying web.py on IIS7 using PyISAPIe

I spent the last week travailing away, painfully trying to find a way to deploy a web.py-based API on IIS7 using PyISAPIe. As frustrations began to mount, I nearly decided to give up. Being a die-hard Linux and Mac guy, I despise having to work on Windows. Here I was, not only forced to work on Windows, but to find a solution for a problem that left no leaf unturned in its effort to drive me crazy. And as if someone decided all this misery wasn’t quite enough, I had to work over a remote desktop session in order to research, tweak, bang my head, and get things to work. Eventually, I cut through the massive frustration and despair and managed to find a satisfactory solution. I almost danced in excitement and relief, letting out all sorts of expletives directed at Windows in general and IIS in particular.

To get back to the important question of deploying a web.py script on IIS7 using PyISAPIe: this guide lists the various steps I took, including snippets of the relevant code I changed, to tame the beast. I can only hope that what follows will help some poor, miserable soul looking for help as I did (and found none).

I worked with PyISAPIe because I had already successfully deployed multiple Django websites on IIS7 with it. The script in question was going to be part of another Django website (though acting independently), so it only made sense to use PyISAPIe for it as well.

First and foremost, I had to install the web.py module on the system. Having had trouble before with IIS and web.py installed through easy_install, I played it safe and installed web.py from source. Getting web.py to work with PyISAPIe required a small hack (I realize I may make it sound as though it all came to me in a dream, but in reality it took me days, and much anguish and pain, to figure out). In the file Lib\site-packages\web\wsgi.py lies the following function:

def _is_dev_mode():
    # quick hack to check if the program is running in dev mode.
    if os.environ.has_key('SERVER_SOFTWARE') \
        or os.environ.has_key('PHP_FCGI_CHILDREN') \
        or 'fcgi' in sys.argv or 'fastcgi' in sys.argv \
        or 'mod_wsgi' in sys.argv:
            return False
    return True

In its pristine state, when web.py is imported from a source file through PyISAPIe, an exception is thrown. The exception, while I don’t have the exact message, complains about sys not having an attribute argv, which reads fishy. Since _is_dev_mode() only checks whether web.py is running in development mode, and I wanted everything to run in production mode anyway, I didn’t care about the check. I edited the function so that its body would be bypassed and it would simply return False. It looked like this (the important change is the early return at the top):

def _is_dev_mode():
    return False
    # quick hack to check if the program is running in dev mode.
    if os.environ.has_key('SERVER_SOFTWARE') \
        or os.environ.has_key('PHP_FCGI_CHILDREN') \
        or 'fcgi' in sys.argv or 'fastcgi' in sys.argv \
        or 'mod_wsgi' in sys.argv:
            return False
    return True

This innocuous little addition did away with the exception.
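As an aside, if you would rather not touch the installed web.py sources at all, I suspect the same effect can be had from your own code, before web is imported, by making sure sys.argv exists and then forcing production behaviour explicitly. A rough, untested sketch of that idea:

import sys

# Under PyISAPIe there is no sys.argv, which is what web.py's
# _is_dev_mode() appears to trip over at import time.
if not hasattr(sys, "argv"):
    sys.argv = []

import web

# _is_dev_mode() may have guessed "development" above; force
# production behaviour regardless.
web.config.debug = False

I stuck with editing wsgi.py, since that is what I had verified to work.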

Next up, I used the default Hello World-esque example of web.py, found on their site, to test the deployment (I later swapped in my original API script, which was far too complex to trim down into an example). I called it code.py and placed it inside the folder C:\websites\myproject. It looked like this:

  import web
  urls = (
      '/.*', 'hello',
      )
  class hello:
      def GET(self):
          return "Hello, world."
  application = web.application(urls, globals()).wsgifunc()

It was pretty simple. You have to pay particular attention to the call to web.application: I called wsgifunc() on it to get back a WSGI-compatible function to boot the application. I prefer WSGI.

I set up a website under IIS using the IIS Management Console. Since I was working on a 64-bit server edition of Windows and had chosen to use the 32-bit version of Python and all of the modules, I made sure to enable 32-bit support for the application pool used by the website. This was important.
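For reference, if you would rather not click through the Management Console, the same setting can be flipped from an elevated command prompt with appcmd; “MyAppPool” below is a placeholder for whatever your pool is actually called:

%windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /enable32BitAppOnWin64:true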

I decided to keep the PyISAPIe folder inside the folder where code.py rested. This PyISAPIe folder contained, importantly, the PyISAPIe.dll file and the Http folder. Inside the Http folder, I placed the most important file of all: Isapi.py. That file can be thought of as the entry point for each request, the glue between the request and the proper handler code. I worked from the Examples\WSGI\Isapi.py that ships with PyISAPIe and tweaked it to look like this:

from Http.WSGI import RunWSGI
from Http import Env
#from md5 import md5
from hashlib import md5
import imp
import os
import sys
sys.path.append(r"C:\websites\myproject")
from code import application
ScriptHandlers = {
	"/api/": application,
}
def RunScript(Path):
  global ScriptHandlers
  try:
    # attempt to call an already-loaded request function.
    return ScriptHandlers[Path]()
  except KeyError:
    # uses the script path's md5 hash to ensure a unique
    # name - not the best way to do it, but it keeps
    # undesired characters out of the name that will
    # mess up the loading.
    Name = '__'+md5(Path).hexdigest().upper()
    ScriptHandlers[Path] = \
      imp.load_source(Name, Env.SCRIPT_TRANSLATED).Request
    return ScriptHandlers[Path]()
# URL prefixes to map to the roots of each application.
Apps = {
  "/api/" : lambda P: RunWSGI(application),
}
# The main request handler.
def Request():
  # Might be better to do some caching here?
  Name = Env.SCRIPT_NAME
  # Apps might be better off as a tuple-of-tuples,
  # but for the sake of representation I leave it
  # as a dict.
  for App, Handler in Apps.items():
    if Name.startswith(App):
      return Handler(Name)
  # Cause 500 error: there should be a 404 handler, eh?
  raise Exception, "Handler not found."

The important bits to note in the above code are the following:

  • I import application from my code module. I append the directory in which code.py lives to sys.path so that the import statement does not complain. (I have to admit that the idea of importing application and feeding it into RunWSGI came to me while I was in the loo.)
  • I define a script handler that matches the URL prefix I want to associate with my web.py script. (In hindsight, this isn’t necessary, as RunScript() is not used in this example.)
  • In the Apps dictionary, I again route the URL prefix, this time to a lambda that actually calls the RunWSGI function and feeds it application.
  • I also import the md5 function from the hashlib module instead of the md5 module used in the original file, because Python complained about the md5 module being deprecated and suggested using hashlib instead.

And that’s pretty much it. It worked. I couldn’t believe what I saw on the browser in front of me. I danced around my room (while hurling all kinds of expletives).

There’s a caveat, though. If you have specific URLs in your web.py script, as I did in my API script, you will have to modify each of those URLs and add the /api/ prefix to them (or whatever URL prefix you set in Isapi.py). Without that, web.py will not match any URLs in the file.
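For instance, the toy code.py above would need its URL tuple changed to something along these lines before anything under /api/ would match:

  urls = (
      '/api/.*', 'hello',
      )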

What a nightmare! I hope this guide serves to help others.

Thank you for reading. Good bye!

PS: If you want to avoid using PyISAPIe, there is a simpler way of deploying web.py on IIS. It is documented crudely over here.

Browsing on the BlackBerry simulator

I am not a big fan of BlackBerry smart-phones. I realize that there are a lot of people who seemingly can’t exist without access to their email virtually all the time, and for that lot, BlackBerry, with its prominent push email feature, is perhaps a better fit than any other smart-phone platform out there. When it comes to me and my smart-phone usage, I would not go so far as to say that I can’t live without my phone. I can. By every measure, I consider myself a hardcore geek, perhaps more hardcore than most, but I am by no means a gadget freak. While it would be unfair to say that I absolutely abhor typing on small (semi-)keyboards, I don’t quite enjoy the experience either. When it comes down to typing, I would much rather have a full-fledged keyboard. That is why, to me, a compact laptop is many times more important than a fully equipped smart-phone. (For the curious reader, I own a Nokia E72.)

For a recent mobile website project that I worked on, I received a complaint from the client that the layout of certain pages on the site didn’t look quite as expected on BlackBerry devices. Naturally, I had neither a BlackBerry handset nor an easy equivalent to test the issue for myself, so I did what anyone stuck in the same corner would do: I went over to the BlackBerry developer portal to look for BlackBerry simulators.

Unlike the Epoch simulator for Symbian/Nokia phones and the iPhone simulator, the BlackBerry simulators were spread out such that for each BlackBerry smart-phone model in existence there was a separate simulator available. And each download was anywhere from 50 to 150 MB in size.

I chose the simulator for one of the latest BlackBerry handsets and downloaded it. Like the Epoch simulator, BlackBerry simulators are Windows-specific, in that they are available only as Windows executable binaries. I didn’t have Windows anywhere in my study, so I had to set up a Windows guest inside VMware Fusion in order to set up the simulator. To cut a long, painful story short, I was able to install the simulator after tirelessly downloading a big Java SDK update, without which the installation wouldn’t continue. And then I powered up the simulator. I was instantly reminded of the never-ending pain I had suffered at the hands of the Epoch simulator in my previous life as a Symbian developer. The BlackBerry simulator took ages to start up. I almost cursed out loud, because that fact alone opened up old, deep gashes that I thought I had patched up for good. I was mistaken. Never in my dreams had I thought I would have to deal with such a monstrosity again. And, to my utter, absolute dismay, here I was.

Eventually, after what seemed like ages since booting up the simulator, I was able to navigate my way to the BlackBerry browser. I let out a deep sigh and thought that I could now finally concentrate on the problem I had set out to tackle. But no! I couldn’t browse on the BlackBerry browser at all. No amount of fiddling with the 3G and WiFi settings all over the BlackBerry OS got browsing working. From what I could tell, both the 3G and WiFi networks were alive, but there was no traffic flowing through. I almost gave up.

After groping around the Internet with a wince on my face, I was finally able to find out why. Apparently, by default, the BlackBerry simulators are unable to simulate network connections. To get them to, you have to download and install an additional, supplementary simulator called the BlackBerry MDS Simulator. Once it is up and running, your actual BlackBerry simulator will be able to simulate network connections, browse, and perform all sorts of network-related functions. Who knew!

As an aside, there’s also the BlackBerry Email Simulator that simulates messaging functionality.

Front Row glitch with Apple external keyboard

Mac OS X has this application called Front Row. When activated, it takes over the entire screen, displaying what is known as a “10-foot user interface” to let the user kick back, relax, and watch or listen to videos, podcasts, music, or media content of any kind. If you’ve got a big screen, such as an Apple TV or an Apple Cinema Display (which, if I may add, are crazy expensive), then with the aid of Front Row you can enjoy a great media-centre-esque experience.

On Mac OS X, Front Row is tied to a keyboard shortcut: Command + Escape. Once it is activated, you can use the arrow keys to navigate your way through the 10-foot interface, selecting whatever media content you want to look at. A really convenient feature of Front Row is its integration with the Apple Remote. Essentially, you can sit back and navigate the media centre with nothing but the wireless remote.

I’ve owned a MacBook for over two years now. Having used the keyboard on the MacBook for nearly all of that time, I now find that I can barely type on it without causing myself a great deal of agitation. I’m a touch typist, and when I cannot type both fast and accurately, and I know for a fact that it is not because I lack the capacity but because the keyboard is the bottleneck standing in the way, I get frustrated very easily. That, unfortunately, happens to be the case with the MacBook keyboard now. To work around it temporarily, I recently dived, without a clear head, into a shopping spree and emptied my wallet buying the Apple extended external keyboard. While it is not really conducive to touch typing (something I find appropriate to elaborate on in a different article altogether), I am able to get by and get my work done without coming close to a nervous breakdown.

Now, I should point out that I don’t have substantial evidence to prove this (and to that end, I am groping around for it), but I suspect that Front Row and the external Apple keyboard don’t quite play nicely together. I am not a very media-centric person, in that I am not as fond of watching movies as many of my friends are, for example, so I have little to no use for Front Row. However, since I do use all sorts of keyboard shortcuts to perform different functions across many esoteric applications (including Vim, which puts a lot of emphasis on the Escape key), I somehow end up pressing the wrong key combination and activating Front Row unwittingly. But that’s fine, you may say, because all I then have to do is close Front Row. Yes, well, sort of. My problem is that, with the external keyboard attached, if I accidentally start Front Row and let it take over the screen, I am unable to exit it. The application itself has no UI elements that can be clicked to tell it to quit, and the keyboard shortcut that happens to be the only means I know of for exiting it stops working in the presence of the external Apple keyboard. So I end up with a big problem on my hands: an application that essentially takes over my computer, and nothing I can do about it. I’ve tried waiting for the screen saver to kick in, hoping that when I dismiss it I would get control of my system back; I’ve also tried letting the MacBook sleep and wake up again, but all in vain. Front Row simply doesn’t go away, until I am left with no option but to forcefully reboot the MacBook, losing all my data in all my running applications, which, I should mention, keep functioning normally behind the nauseating Front Row screen.

I’ve had this happen inadvertently far too many times to keep ignoring it. So I eventually did the first thing I could think of: find a way to disable the keyboard shortcut. It turned out to be pretty easy. Most of the common keyboard shortcuts tied to various applications and actions are, as it turns out, defined under the Keyboard Shortcuts section of the Keyboard pane inside System Preferences. The shortcut for the bloody Front Row is somewhere in there.

I am pretty sure there’s something amiss with Front Row and its associated keyboard shortcut when an external keyboard is attached, but as I said before, I have nothing to substantiate that claim. Right now, I am happy and relieved to have been able to disable the shortcut.

Ranting about Mercurial (hg) (part 1)

The day I got a grip on git, I fell in love with it. Yes, I am talking about the famous distributed version control application: git. I was a happy Subversion (svn) user before discovering git. The day-to-day workflow with git and the way it worked made me hate Subversion. I was totally head over heels in love with git.

I have come to a point where the first thing I do when I am about to start a new project is create a git repository for it. Why? Because it is so easy and convenient to do. I really like the git workflow and the little tools that come with it. I know that people who complain about git often lament its lack of good, proper GUI front-ends. While the state of GUI front-ends for git has improved drastically over the years, I consider myself a happy command-line user of git. All my interactions with git happen on the command-line. And, I will admit, it does not slow me down, hinder my progress, or limit me in any way in what I do. I have managed fairly large projects with git from the command-line, and let me tell you, if you are not afraid of working on the command-line, it is extremely manageable. However, I understand that a lot of people run away from the command-line as if it were the plague. For those people, there are some really good git GUI front-ends available.

I have to admit that for someone coming from no version control background, or from a centralized version control background (such as Subversion or CVS), the learning curve of git can turn out to be a little steep. I know it was for me, especially when it came to getting to grips with concepts like pushing, pulling, rebasing, and merging. I don’t think I am off-base in thinking that a good number of people who use version control software for fairly basic tasks are afraid of merge conflicts when collaborating on code. I know I was always afraid of them, until I finally faced a merge conflict for the first time. That was the last time I felt afraid, because I was surprised to find out how easy it was to deal with and (usually) resolve merge conflicts. There’s no rocket science to it.

I have been so impressed with git that I wince when I am forced to use any other version control system. In fact, I refuse to use Subversion, as an example, and try my best to convince people to move to git. Why? Because git is just that good. I should probably point the reader to this lovely article that puts down several reasons why git is better than a number of other popular version control applications out there.

Recently, at work, I have had to deal with the beast they call Mercurial (hg). Yes, it is another famous distributed version control system. I was furious at my peers’ decision to select Mercurial for work-related use and tried my best to convince them to use git instead, but they didn’t give in to my attempts to persuade them. I had no choice but to unwillingly embrace the beast and work with it.

Instantly, I had big gripes with the way Mercurial worked, as well as with its general workflow. Where I fell in love with how git tracks content and not files, I hated the fact that Mercurial tracks files. What I mean by that is this: if you add a file in a Mercurial repository to be tracked and commit it, and then make any changes to the file, Mercurial automatically includes it in the staging commit. It doesn’t let you manually specify which of the changed files should be part of the staging commit, something git does. A staging commit is a staging area where files that have been changed, and have been selected to be part of the next commit, are loaded, ready to be committed. With git, you have to manually specify which files to add to the staging commit, and I loved doing it that way. With Mercurial, it is done automatically. This brings me to the second big gripe I had with Mercurial: incremental commits. The git workflow places an indirect emphasis on incremental commits. What are incremental commits? In the version control world, it is considered best practice to make each commit as small as possible while not breaking any existing functionality. So if, for example, you have made changes to ten files to add five different pieces of functionality, which may be independent of each other, you would like to commit one piece of functionality at a time. That is where incremental commits come into play: you specify only the files you want committed and commit them, instead of committing all the changed files in one bunch with a general-purpose commit message. Incremental commits are a big win for me, and I adore the practice. Because git makes you manually specify which of the changed files go into the staging area, incremental commits come easily and naturally. With Mercurial, which automatically adds all changes to tracked files into the staging commit, they do not.

So I asked the obvious question: how do we do incremental commits with Mercurial? After sifting through the logs, I found out that it was indeed possible, but in a very ugly way. The Mercurial commit command takes an extra parameter, -I, that can be used to specify explicitly which files to make part of the commit. But that wasn’t all. If, say, you wanted to commit five of the ten changed files, you would have to attach the -I switch to each of those five files; otherwise, Mercurial will fail to include them in the commit. To this day, this little quirk bites me every now and then.
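To make the quirk concrete, committing just two of the changed files with that switch looks something like this (the file names are made up for illustration):

$ hg commit -I api.py -I models.py -m "add rate limiting to the API"

The git equivalent, for comparison, is the usual stage-then-commit dance:

$ git add api.py models.py
$ git commit -m "add rate limiting to the API"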

I will admit that the one thing I really liked about Mercurial was the built-in web server it comes with, which provides a web-based interface for browsing your Mercurial repositories. This was something that was missing from, or at least quite difficult to do with, git. And this was also one of the biggest reasons my peers at work decided to go with Mercurial: they all wanted a convenient way to access the repositories over the web.
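For the curious, that web interface is a one-liner to bring up. Run from inside a repository, something like this serves it on a local port (8000 here is arbitrary):

$ hg serve -p 8000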

To be continued.

Jagged fonts on Snow Leopard on LCD over DVI

I have a 13″ white MacBook. It has a glossy display that is about the best I have ever used in my life. Great quality, lovely fonts. The only drawback with it is the drawback of all glossy displays: it reflects ambient light, which can at times cause problems. But I really don’t mind that; I can work around it easily. Overall, I am really happy with the display on the MacBook.

The only problem is that 13″ is just not enough. Not all the time, but when you are managing many things side by side, and not least when writing code, you can really get frustrated by not having enough space on the screen. It doesn’t help that you have windows hiding behind the window currently in focus because there is no room left anywhere on the screen to toss two or three windows side by side and still get anything done.

So, a year ago, after careful thought and research, I purchased a ViewSonic VX1940w external LCD. The reasons I settled on this one, and not any other, were a combination of: a) the great price I was getting on it; b) the excellent resolution it offered at that price; and c) the fact that it sported both VGA and DVI inputs. I really, really wanted an LCD with a DVI input. If you haven’t clicked through to the LCD’s catalog page, it sports a maximum resolution of 1680×1050, measures 19″, and is wide-screen. It has a matte display, so it does not have the reflection problem glossy displays suffer from, but it isn’t nearly as crisp as the glossy either.

If you’ve had occasion to use any Mac laptop, you’ll know that they don’t have the de facto standard VGA and DVI output ports. Instead, they have one of mini-DVI, mini-VGA, mini-DisplayPort, or micro-DVI ports, which, by the way, are all Apple-proprietary. So how do you connect the LCDs out there that almost all come with standard VGA and DVI ports and cables? You buy VGA and/or DVI adapters from Apple. These, I should mention, are anything but cheap.

Naturally, I wanted to buy the DVI adapter for my MacBook. However, I did some research and found bad news: a lot of people on the Mac forums reported their LCDs not working with the DVI adapters Apple provides, because the latter are either DVI-I or DVI-D and many LCDs support only one or the other. That scared me. The DVI adapter was expensive to begin with, and I was not sure whether it would really work with my LCD. So I took the safe lane out and bought the VGA adapter. And it worked out of the box.

But there were problems. These were the sort of problems I documented in a question titled “Why do I feel a strain on my eyes when I look at the 19″ matte display?”, which I asked on the Super User forum. You may want to skim through it for the excruciating details of the problems I faced with the VGA output on the LCD. The bottom line was that I was told to get the DVI adapter instead, and that because of the jagged fonts and generally sub-standard picture quality over VGA, my eyes were having a hard time adjusting, resulting in strain and headaches. Initially, it didn’t make sense to me how that could really be the cause of the problem, and I didn’t buy it. But the fact remained that I couldn’t use the LCD for long without walking away with an incredible burden weighing down on my eyes.

Finally, I bit the bullet today and bought a mini-DVI to DVI adapter. Having plugged it in, I noticed that the “auto image adjust” functionality on the LCD was disabled; it hadn’t been when VGA was plugged in. Despite changing a couple of settings that I thought might make a difference, I felt that the fonts looked even more jagged and the picture quality worse than before. Having spent some more time staring at the screen, opening up terminals and editors and windows, I realized it wasn’t merely a feeling: the fonts and the display did look much worse than before. I freaked out. I rebooted the MacBook in the hope that when I plugged in the LCD again, I would actually surprise myself. To my utter dismay, nothing changed. I was sorely disappointed, and I didn’t know what to do. A feeling of remorse hit me for having spent a fortune on the DVI adapter only to get a crappier picture than I did over VGA.

With a heavy heart, I started looking around. I skimmed with grief through a couple of similar posts on forums about how fonts over DVI on an external LCD looked jagged, and such. Nothing that was suggested really helped. And then I found this blog post, Snow Leopard Font Smoothing. It described the exact same problem I was having, and, what’s more, it suggested a fix: toggle a global setting on OS X by running the following command in Terminal.app:

defaults -currentHost write -globalDomain AppleFontSmoothing -int 2

I did this, but nothing seemed to change. I wasn’t really sure what else I had to do to get it to work, nor did the blog post hint at anything beyond running that command. After a while, I was convinced that there was no hope for me in this. I had nearly given up when I noticed the text in the menu bar. It looked different. It was crisper, better, more beautiful. And then it hit me. First of all, I quickly quit Terminal.app, and when I re-opened it, voila! There they were: the lovely fonts I had fallen in love with on the MacBook screen. I restarted most of my applications one by one. Apparently, the little helpful bit that the blog post missed was that you have to close your applications and re-open them for the setting to take effect. I imagine an easier way is to simply log out and log back in.
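In case anyone wants to undo the change later, deleting the key should, as far as I can tell, restore the default behaviour (again, restart your applications, or log out and back in, afterwards):

defaults -currentHost delete -globalDomain AppleFontSmoothing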

I am absolutely ecstatic about all this. Not only can I use my big LCD to manage my work in a better and more productive manner, I also don’t have to walk away with unbearable headaches and eye strain. Well, I still have headaches and eye strain, to be honest, but that’s an everyday thing. I am so happy.