185.31.17.0/24 subnet of Fastly’s CDN null-routed in Pakistan?

I rely heavily on GitHub and Foursquare every day, the former for work and pleasure, and the latter for keeping track of where I go over the course of a day. Since yesterday, though, I have been noticing that pages on GitHub take close to an eternity to open, if they don’t fail outright. Even when a page loads, all of the static content is missing, and many other things don’t work. With Foursquare, I haven’t been able to get a list of places to check in to. Yesterday, I wrote these off as glitches on Foursquare’s and GitHub’s networks.
 
It was only today that I realized what was going on. GitHub and Foursquare both rely on Fastly’s CDN services. And, for some reason, Fastly’s CDN has not been working within Pakistan.
 
The first thing I did was look up Fastly’s website and found that it didn’t open for me. Whoa! GitHub’s not working, Foursquare’s not loading, and now, I can’t get to Fastly.
 
I ran a traceroute to Fastly, and to my utter surprise, the trace ended up with a !X (comms administratively prohibited) response from one of the level3.net routers.
 
$ traceroute fastly.com
traceroute: Warning: fastly.com has multiple addresses; using 216.146.46.10
traceroute to fastly.com (216.146.46.10), 64 hops max, 52 byte packets
[...]
 6 xe-8-1-3.edge4.frankfurt1.level3.net (212.162.25.89) 157.577 ms 158.102 ms 166.088 ms
 7 vlan80.csw3.frankfurt1.level3.net (4.69.154.190) 236.032 ms
 vlan60.csw1.frankfurt1.level3.net (4.69.154.62) 236.247 ms 236.731 ms
 8 ae-72-72.ebr2.frankfurt1.level3.net (4.69.140.21) 236.029 ms 236.606 ms
 ae-62-62.ebr2.frankfurt1.level3.net (4.69.140.17) 236.804 ms
 9 ae-22-22.ebr2.london1.level3.net (4.69.148.189) 236.159 ms
 ae-24-24.ebr2.london1.level3.net (4.69.148.197) 236.017 ms
 ae-23-23.ebr2.london1.level3.net (4.69.148.193) 236.115 ms
10 ae-42-42.ebr1.newyork1.level3.net (4.69.137.70) 235.838 ms
 ae-41-41.ebr1.newyork1.level3.net (4.69.137.66) 236.237 ms
 ae-43-43.ebr1.newyork1.level3.net (4.69.137.74) 235.998 ms
11 ae-91-91.csw4.newyork1.level3.net (4.69.134.78) 235.980 ms
 ae-81-81.csw3.newyork1.level3.net (4.69.134.74) 236.211 ms 235.548 ms
12 ae-23-70.car3.newyork1.level3.net (4.69.155.69) 236.151 ms 235.730 ms
 ae-43-90.car3.newyork1.level3.net (4.69.155.197) 235.768 ms
13 dynamic-net.car3.newyork1.level3.net (4.53.90.150) 236.116 ms 236.453 ms 236.565 ms
14 dynamic-net.car3.newyork1.level3.net (4.53.90.150) 237.399 ms !X 236.225 ms !X 235.870 ms !X

Now, that, I thought, was most odd. Why was level3 prohibiting the trace?

I went looking for a support contact at Fastly to try and find anything that could explain what was going on. I found their IRC chat room on FreeNode (I spend a lot of time on FreeNode) and didn’t waste time dropping into it. The kind folks there told me that they’d had reports of one of their IP ranges being null-routed in Pakistan: the 185.31.17.0/24 range. I did some network prodding about, and confirmed that that was indeed the subnet I couldn’t get to from within Pakistan.

$ ping -c 1 185.31.18.133
PING 185.31.18.133 (185.31.18.133): 56 data bytes
64 bytes from 185.31.18.133: icmp_seq=0 ttl=55 time=145.194 ms
--- 185.31.18.133 ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 145.194/145.194/145.194/0.000 ms

$ ping -c 1 185.31.16.133
PING 185.31.16.133 (185.31.16.133): 56 data bytes
64 bytes from 185.31.16.133: icmp_seq=0 ttl=51 time=188.872 ms
--- 185.31.16.133 ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 188.872/188.872/188.872/0.000 ms

$ ping -c 1 185.31.17.133
PING 185.31.17.133 (185.31.17.133): 56 data bytes
--- 185.31.17.133 ping statistics ---
1 packets transmitted, 0 packets received, 100.0% packet loss

They also told me they’d had reports from users on both PTCL and TWA that the range in question was null-routed. They said they didn’t know why it had been null-routed, but would appreciate any information the locals could provide.

This is ludicrous. After Wi-Tribe’s filtering of UDP DNS packets to Google’s DNS and OpenDNS servers (which they still do), this is the second absolutely preposterous thing to have pissed me off.

Batman: Arkham City on Xbox 360

I’m what you can safely call a hardcore gamer. My stretch with gaming goes a long way back, starting with a measly Atari computer my dad bought me from abroad, migrating to PC gaming, and eventually shifting to the world of consoles, leaving PC gaming behind forever. I have owned a little under half a dozen gaming consoles, and have played on others at friends’ places. The console I have played the most games on would have to be Sony’s original PlayStation. The sheer number of games I played and clocked on it is staggering; I couldn’t remember how many even if I forced myself to.

When my younger brother sold our PlayStation 2 to buy an acoustic guitar and further his interest in music, my gaming world came to an abrupt halt. For a while after that, I didn’t really play games. It was well after I had started earning that I finally decided to revive my former, dormant self and went out to buy the Xbox 360. To date, it is the console I happily play my games on. And I love it.

A week or so ago, I updated the firmware on the Xbox to the latest one available and got my hands on half a dozen games. Some I’d care to name are: Batman: Arkham City, Gears of War 3, Skyrim, and FIFA 12. Of particular interest to me was Batman: Arkham City. It was one game I had been eagerly anticipating and looking forward to playing.

Half an hour into the game, I fell in love with it. Everything about it. The gameplay in particular reminded me of that of Assassin’s Creed 2, another game I thoroughly enjoyed playing. I was also very happy to learn that the game featured a very long storyline. I saw myself having fun with this game for a while.

Until something hideous happened. It went like this. I had the game installed on the hard disk, so as to preserve the lens on the Xbox. When I first ran the game, I decided to store saved game progress on the hard disk as well. It all went fine, until I had to take my Xbox over to a friend’s house for a game night. There, we had to remove Batman from the hard disk to make room for another game. I suspected that deleting the game might also remove the saved progress on the disk, but on the assurances of friends, I took the bullet. The next day, when I sat down to play Batman at home, I was taken aback to find that my saved progress was gone. Zilch. Poof. There was nothing, as though there had never been anything. I felt incredibly sad. I blamed the loss of saved progress on the deletion of the game from the disk, though logically that blame didn’t make sense.

Dejectedly, I sat down to play the game from the start, this time making sure to save progress on the memory card. I played over a couple of days, reached where I had left off, and moved ahead, happy with my progress.

This morning, when I ran the game, I felt the exact same horror I did last week. The saved progress was gone again. It was nowhere to be found on the memory card. I searched everywhere, to no avail. I didn’t know what could have caused it this time around. I yanked out the memory card and slid it into the second memory bay. Nothing. It was gone, without a trace. I felt betrayed. I let the game linger on the start screen, contemplating whether I should start over again. I couldn’t make up my mind.

And then, on a whim, I ran a search on Google to see if I could find any information on this erratic behaviour. Voila, I did. Many others on Xbox had reported similar issues, as well as their resentment over them. This is the search I ran.

For as long as I could bear to sift through piles of forum posts and articles over the web, I couldn’t find any resolution for the problem. If anything, it helped me come to a temporary decision: to hold off on playing the game again until a workable solution is found. Holding off isn’t easy to do. It is such an awesome game, and I want so much to play it through to the end. Oh well, off to play Skyrim!

Guide: Deploying web.py on IIS7 using PyISAPIe

I spent the last week travailing away, trying painfully to find a way to deploy a web.py based API on IIS7 using PyISAPIe. As frustrations began to mount, I had nearly decided to give up. Being a die-hard Linux and Mac guy, I despise having to work on Windows. Here I was, not only forced to work on Windows, but to find a solution for a problem that left no stone unturned in its effort to drive me crazy. As if someone had decided all this misery wasn’t quite enough, I had to work over a remote desktop session in order to research, tweak, bang my head, and get things to work. Eventually, I cut through massive frustration and despair and managed to find a satisfactory solution. I almost danced in excitement and relief, letting out all sorts of expletives directed at Windows in general and IIS in particular.

To get back to the important question of deploying a web.py script on IIS7 using PyISAPIe: this guide lists the various steps I took, including snippets of the relevant code I changed, to tame the beast. I can only hope that what follows will help a poor, miserable soul looking for help as I did (and found none).

I worked with PyISAPIe because I had successfully deployed multiple Django websites on IIS7 with it. The script in question was going to be part of another Django website (though acting independently), so it only made sense to use PyISAPIe for it as well.

First and foremost, I had to install the web.py module on the system. Having had trouble before with IIS when web.py was installed through easy_install, I decided to play it safe and installed it from source. Getting web.py to work with PyISAPIe required a small hack (I realize I may make it sound as though it all came down to me in a dream, but in reality, it took me days to figure out, and clearly after much anguish and pain). In the file Lib\site-packages\web\wsgi.py lies the following function:

def _is_dev_mode():
    # quick hack to check if the program is running in dev mode.
    if os.environ.has_key('SERVER_SOFTWARE') \
        or os.environ.has_key('PHP_FCGI_CHILDREN') \
        or 'fcgi' in sys.argv or 'fastcgi' in sys.argv \
        or 'mod_wsgi' in sys.argv:
            return False
    return True

In its pristine state, when web.py is imported from a source file through PyISAPIe, an exception is thrown. The exception, while I don’t have the exact message, complains about sys not having an attribute argv (presumably because PyISAPIe’s embedded interpreter never populates sys.argv), which reads fishy. Since the function _is_dev_mode() only checks whether web.py is being run in development mode, and I wanted everything to run in production mode, I didn’t care about it. I edited the function so that its body would be bypassed and it would simply return False. It looked like this (the important change is the early return at the top):

def _is_dev_mode():
    return False
    # quick hack to check if the program is running in dev mode.
    if os.environ.has_key('SERVER_SOFTWARE') \
        or os.environ.has_key('PHP_FCGI_CHILDREN') \
        or 'fcgi' in sys.argv or 'fastcgi' in sys.argv \
        or 'mod_wsgi' in sys.argv:
            return False
    return True

This innocuous little addition did away with the exception.
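
For what it’s worth, a less invasive variant of the same hack might be to guard only against the missing attribute and leave the rest of the check intact. This is a sketch of that idea, not something I have tested under PyISAPIe:

def _is_dev_mode():
    # PyISAPIe's embedded interpreter doesn't populate sys.argv,
    # so assume production mode when it is absent (untested sketch).
    if not hasattr(sys, 'argv'):
        return False
    # quick hack to check if the program is running in dev mode.
    if os.environ.has_key('SERVER_SOFTWARE') \
        or os.environ.has_key('PHP_FCGI_CHILDREN') \
        or 'fcgi' in sys.argv or 'fastcgi' in sys.argv \
        or 'mod_wsgi' in sys.argv:
            return False
    return True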

Next up, I used the default Hello World-esque example of web.py found on their site to test the deployment (for the actual deployment, of course, I went on to use my original API script, which is far too complex to trim down into an example). I called it code.py and placed it inside the folder C:\websites\myproject. It looked like this:

  import web
  urls = (
      '/.*', 'hello',
      )
  class hello:
      def GET(self):
          return "Hello, world."
  application = web.application(urls, globals()).wsgifunc()

It was pretty simple. You have to pay particular attention to the call to web.application: I call wsgifunc() on it to return a WSGI-compatible function to boot the application with. I prefer WSGI.
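
As an aside, if you want to sanity-check such a script locally before involving IIS at all, web.py’s built-in development server can serve it directly. Here is a minimal variant of the same file; the run() call is standard web.py (it serves on port 8080 by default) and plays no part in the IIS deployment:

  import web

  urls = (
      '/.*', 'hello',
      )

  class hello:
      def GET(self):
          return "Hello, world."

  app = web.application(urls, globals())
  # What PyISAPIe/IIS will consume:
  application = app.wsgifunc()

  if __name__ == "__main__":
      # web.py's built-in development server, for local testing only.
      app.run()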

I set up a website under IIS using the IIS Management Console. Since I was working on a 64-bit server edition of Windows and had chosen to use the 32-bit version of Python and all modules, I made sure to enable 32-bit support for the application pool used by the website. This was important.
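
For the record, the same setting can also be flipped from an elevated command prompt using IIS7’s appcmd tool; a one-liner sketch, where the pool name "MyProjectPool" is hypothetical:

%windir%\system32\inetsrv\appcmd set apppool "MyProjectPool" /enable32BitAppOnWin64:true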

I decided to keep the PyISAPIe folder inside the folder where code.py rested. This PyISAPIe folder contained, most importantly, the PyISAPIe.dll file and the Http folder. Inside the Http folder, I placed the most important file of all: Isapi.py. That file can be thought of as the entry point for each request that is made; it is what glues a request to the proper handler and code. I worked from the Examples\WSGI\Isapi.py available as part of PyISAPIe, and tweaked the file to look like this:

from Http.WSGI import RunWSGI
from Http import Env
#from md5 import md5
from hashlib import md5
import imp
import os
import sys
sys.path.append(r"C:\websites\myproject")
from code import application
ScriptHandlers = {
  "/api/": application,
}
def RunScript(Path):
  global ScriptHandlers
  try:
    # attempt to call an already-loaded request function.
    return ScriptHandlers[Path]()
  except KeyError:
    # uses the script path's md5 hash to ensure a unique
    # name - not the best way to do it, but it keeps
    # undesired characters out of the name that will
    # mess up the loading.
    Name = '__'+md5(Path).hexdigest().upper()
    ScriptHandlers[Path] = \
      imp.load_source(Name, Env.SCRIPT_TRANSLATED).Request
    return ScriptHandlers[Path]()
# URL prefixes to map to the roots of each application.
Apps = {
  "/api/" : lambda P: RunWSGI(application),
}
# The main request handler.
def Request():
  # Might be better to do some caching here?
  Name = Env.SCRIPT_NAME
  # Apps might be better off as a tuple-of-tuples,
  # but for the sake of representation I leave it
  # as a dict.
  for App, Handler in Apps.items():
    if Name.startswith(App):
      return Handler(Name)
  # Cause 500 error: there should be a 404 handler, eh?
  raise Exception, "Handler not found."

The important bits to note in the above code are the following:

  • I import application from my code module. I append the directory containing code.py to sys.path so that the import statement does not complain. (I have to admit that the idea of importing application and feeding it into RunWSGI came to me while I was in the loo.)
  • I defined a script handler that matches the URL prefix I want to associate with my web.py script. (In hindsight, this isn’t necessary, as RunScript() is not being used in this example.)
  • In the Apps dictionary, I again route the URL prefix to the lambda, which actually calls the RunWSGI function and feeds it application.
  • I also imported the md5 function from the hashlib module instead of from the md5 module as originally done in the file, because Python complained that the md5 module was deprecated and suggested using hashlib instead.

And that’s pretty much it. It worked. I couldn’t believe what I saw in the browser in front of me. I danced around my room (while hurling all kinds of expletives).

There’s a caveat, though. If you have specific URLs in your web.py script, as I did in my API script, you will have to modify each of those URLs and add the /api/ prefix to them (or whatever URL prefix you set in Isapi.py). Without that, web.py will not match any URLs in the file.
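
To make that concrete, the urls tuple from the earlier example would need to change along these lines, assuming the /api/ prefix mapped in Isapi.py:

  urls = (
      '/api/.*', 'hello',
      )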

What a nightmare! I hope this guide serves to help others.

Thank you for reading. Good bye!

PS: If you want to avoid using PyISAPIe, there is a simpler way of deploying web.py on IIS. It is documented crudely over here.

Browsing on the BlackBerry simulator

I am not a big fan of BlackBerry smart-phones. I realize that there are a lot of people who seemingly can’t exist without access to their email virtually all the time, and for them, BlackBerry, with its prominent push email feature, is perhaps a better fit than any other smart-phone platform out there. As for me and my smart-phone usage, I would not go so far as to say that I can’t live without my phone. I can. By every measure, I consider myself a hardcore geek, perhaps more hardcore than most, but I am by no means a gadget freak. While it would be unfair to say that I absolutely abhor typing on small (semi-)keyboards, I don’t quite enjoy the experience either. When it comes down to typing, I would much rather have a full-fledged keyboard. That is why, to me, a compact laptop is many times more important than a fully equipped smart-phone. (For the curious reader, I own a Nokia E72.)

For a recent mobile website project I worked on, I faced a complaint from the client that the layout of certain pages on the site didn’t look quite as expected on BlackBerry devices. Naturally, I didn’t have a BlackBerry handset, nor an easy equivalent, to test the issue myself, so I did what anyone stuck in the same corner would do: I went over to the BlackBerry developer portal to look for BlackBerry simulators.

Unlike the Epoch simulator for Symbian/Nokia phones and the iPhone simulator, the BlackBerry simulators were spread out such that for each BlackBerry smart-phone model in existence, there was a separate simulator available. And each download was anywhere from 50 to 150 MB in size.

I chose the simulator for one of the latest BlackBerry handsets and downloaded it. Like the Epoch simulator, BlackBerry simulators are Windows-specific, in that they are available as Windows executable binaries. I didn’t have Windows anywhere in my study, so I had to set up a Windows guest inside VMware Fusion to install the simulator. To cut a long, painful story short, I was able to install it after tirelessly downloading a big Java SDK update, without which the installation wouldn’t continue. And then I powered up the simulator. I was instantly reminded of the never-ending pain I had suffered at the hands of the Epoch simulator in my previous life as a Symbian developer. The BlackBerry simulator took ages to start up. I almost cursed out loud, because that fact alone opened up old, deep gashes I had thought I had patched up for good. I was mistaken. Never in my dreams had I thought I would have to deal with such a monstrosity again. And, to my utter, absolute dismay, here I was.

Eventually, after what seemed like ages since booting up the simulator, I was able to navigate my way to the BlackBerry browser. I let out a deep sigh and thought I could finally concentrate on the problem I had set out to tackle. But, no! I couldn’t browse on the BlackBerry browser at all. No amount of fiddling with the 3G and WiFi settings all over the BlackBerry OS got browsing working. From what I could tell, both the 3G and WiFi networks were alive, but there was no traffic flowing through. I almost gave up.

After groping around the Internet with a wince on my face, I finally found out why. Apparently, by default, the BlackBerry simulators are unable to simulate network connections. To fix this, you have to download and install an additional, supplementary simulator called the BlackBerry MDS Simulator. Once the MDS simulator is up and running, your actual BlackBerry simulator will be able to simulate network connections, browse, and perform all sorts of network-related functions. Who knew!

As an aside, there’s also the BlackBerry Email Simulator, which simulates messaging functionality.

Front Row glitch with Apple external keyboard

MacOS has this application called Front Row. When activated, it takes over the entire screen, displaying what is known as a “10-foot user interface” to allow the user to kick back, relax, and watch and listen to videos, podcasts, music, or media content of any kind. If you’ve got a big screen, such as an Apple TV or an Apple Cinema Display (which, if I may add, are crazy expensive), then with the aid of Front Row you can enjoy a great media centre-esque experience.

On MacOS, Front Row is tied to a keyboard shortcut: Command + Escape. Once it is activated, you can use the arrow keys to navigate your way through the 10-foot interface, selecting whatever media content you want to look at. A really convenient feature of Front Row is its integration with the Apple remote. Essentially, you can sit back and navigate the media centre through nothing but the wireless remote.

I’ve owned a MacBook for over two years now. Having used the keyboard on the MacBook for nearly all this time, I now find that I can barely type on it without causing myself a great deal of agitation. I’m a touch typist, and when I cannot type both fast and accurately, and I know for a fact that the keyboard is the bottleneck standing in the way rather than my own capacity, I get frustrated very easily. This, unfortunately, is now the case with the MacBook keyboard. To work around it temporarily, I recently dived, without a clear head, into a shopping spree and emptied my wallet buying the Apple extended external keyboard. While it is not really conducive to touch typing (something I find appropriate to elaborate on in a different article altogether), I am able to get by and get my work done without coming close to a nervous breakdown.

Now, I should point out that I don’t have substantial evidence to prove this (and to that end, I am groping around for it), but I suspect that Front Row and the external Apple keyboard don’t quite play nicely together. I am not a very media-centric person, in that I am not as fond of watching movies as many of my friends are, for example, so I have little to no use for Front Row. However, since I use all sorts of keyboard shortcuts to perform different functions across many esoteric applications (including Vim, which puts a lot of emphasis on the Escape key), I somehow end up pressing the wrong key combination and activating Front Row unwittingly. But that’s fine, you may say, because all I then have to do is close Front Row. Yes, well, sort of. My problem is that, with the external keyboard attached, if I accidentally start up Front Row and let it take over the screen, I am unable to exit it. The application itself has no UI elements that can be clicked to make it quit, and the keyboard shortcut that is the only means I know of for exiting it stops working in the presence of an external Apple keyboard. So I end up with a big problem on my hands: an application that essentially takes over my computer, and nothing I can do about it. I’ve tried waiting for the screen saver to kick in, hoping that when I made it go away I would get control of my system back; I’ve also tried letting the MacBook sleep and wake up subsequently, but all in vain. Front Row simply doesn’t go away, until I am left with no option but to forcefully reboot the MacBook, losing all my data in all my running applications, which, I should mention, keep functioning normally behind the nauseating Front Row screen.

I’ve had this happen inadvertently far too many times to continue ignoring it. So I eventually did the first thing I could think of: find a way to disable the keyboard shortcut. It turned out to be pretty easy. Most of the common keyboard shortcuts linking to different applications and actions are, as it turns out, defined under the Keyboard Shortcuts section of the Keyboard pane inside System Preferences. The shortcut for the bloody Front Row is somewhere in there.

I am pretty sure there’s something amiss with Front Row and its keyboard shortcut when an external keyboard is attached, but as I said before, I have nothing to substantiate that claim. Right now, I am happy and relieved to be able to disable the shortcut.

Ranting about Mercurial (hg) (part 1)

The day I got a grip on git, I fell in love with it. Yes, I am talking about the famous distributed version control application: git. I was a happy Subversion (svn) user before discovering git. The day-to-day workflow with git, and the way git worked, made me hate Subversion. I was totally head over heels in love with git.

I have come to a point where the first thing I do when I am about to start a new project is to create a git repository for it. Why? Because it is so easy and convenient to do. I really like the git workflow and the little tools that come with it. I know that people who complain about git often lament its lack of good, proper GUI front-ends. While the state of GUI front-ends for git has improved drastically over the years, I consider myself a happy command-line user of git. All my interactions with git happen on the command-line. And, I will admit, it does not slow me down, hinder my progress, or limit me in any way in what I do. I have managed fairly large projects with git from the command-line, and let me tell you, if you are not afraid of working on the command-line, it is extremely manageable. However, I understand that a lot of people run away from the command-line as if it were the plague. For those people, there are some really good git GUI front-ends available.

I have to admit that for someone coming from no version control background, or from a centralized version control background (such as Subversion or CVS), the learning curve for git can turn out to be a little steep. I know it was for me, especially when it came to getting to grips with concepts involving pushing, pulling, rebasing, merging, etc. I don’t think I will be off-base in thinking that a good number of people who use version control software for fairly basic tasks are afraid of merge conflicts when collaborating on code. I know I was always afraid of them, until I finally faced a merge conflict for the first time. That was the last time I felt afraid, because I was surprised to find out how easy it is to deal with and (usually) resolve merge conflicts. There’s no rocket science to it.

I have been so impressed with git that I wince when I am forced to use any other version control system. In fact, I refuse to use Subversion, as an example, and try my best to convince people to move to git. Why? Because git is just that good. I should probably point the reader to this lovely article that lays down several reasons why git is better than a number of other popular version control applications out there.

Recently, at work, I have had to deal with the beast they call Mercurial (hg). Yes, it is another famous distributed version control system. I was furious at the decision of my peers to select Mercurial for work-related use, and tried my best to convince them to use git instead, but they didn’t give in to my attempts to persuade them. I had no choice but to unwillingly embrace the beast and work with it.

Instantly, I had big gripes with the way Mercurial worked, as well as with its general workflow. Where I fell in love with how git tracks content and not files, I hated the fact that Mercurial only tracks files. What I mean is: if you add a file in a Mercurial repository to be tracked and commit it, then any subsequent change to that file is automatically added to the staging commit. Mercurial won’t let you manually specify which of the changed files should be part of the staging commit, something git does. A staging commit is a staging area where files that have been changed, and have been selected to be part of the next commit, are loaded, ready to be committed. With git, you have to manually specify which files to add to the staging commit. And I loved doing it that way. With Mercurial, this is done automatically. Which brings me to the second big gripe I had with Mercurial: incremental commits. The git workflow places an indirect emphasis on incremental commits. What are incremental commits? In the version control world, it is considered best practice to make each commit as small as possible, such that it does not break any existing functionality. Say, for example, you have made changes to ten files to add five different functionalities that are independent of each other, but you would like to commit one functionality at a time. That is where incremental commits come into action. With incremental commits, you specify only the files you want committed and commit them, instead of committing all the changed files in a bunch under a general-purpose commit message. Incremental commits are a big win for me, and I adore the feature. Because git makes you manually specify which of the changed files you want in the staging area, incremental commits come easily and naturally. With Mercurial, which automatically adds all changes to tracked files into the staging commit, they do not.
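
To make the contrast concrete, here is what an incremental commit looks like in git: you stage just the files that belong to one change (the file names here are hypothetical) and leave the rest for later commits.

$ git add parser.py lexer.py
$ git commit -m "Add tokenizer support"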

So I asked the obvious question: how do we do incremental commits with Mercurial? After sifting through the docs, I found out that it was indeed possible, but in a very ugly way. The Mercurial commit command takes an extra parameter, -I, that can be used to explicitly specify which files to make part of the commit. But that wasn’t all. If, say, you wanted to commit five of the ten changed files, you would have to tag the -I switch behind each of those five files; otherwise, Mercurial would fail to include them in the commit. To this day, this little quirk bites me every now and then.
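
The Mercurial near-equivalent of the git example above, as described, ends up looking like this, with one -I tagged behind each file (again, hypothetical names):

$ hg commit -I parser.py -I lexer.py -m "Add tokenizer support"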

I will admit that the one thing I really liked about Mercurial was the built-in web server it comes with, which provides a web-based interface for viewing your Mercurial repositories. This was something that was missing from, or quite difficult to do with, git. And this was also one of the biggest reasons my peers at work decided to go with Mercurial. They all wanted a convenient way to access the repositories over the web.
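
For reference, that server is a single command away, run from inside a repository; the port below is arbitrary:

$ hg serve -p 8000

The repository can then be browsed at http://localhost:8000/.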

To be continued.

Jagged fonts on Snow Leopard on LCD over DVI

I have a 13″ white MacBook. It has a glossy display that is about the best I have ever used in my life. Great quality, lovely fonts. The only drawback with it is the drawback of glossy displays in general: it reflects ambient light, which can at times cause problems. But I really don’t mind that. I can work around it easily. Overall, I am really happy with the display on the MacBook.

The only problem is, 13″ is just not enough. Not all the time. When you are managing many things side by side, and not least when writing code, you can get really frustrated by not having enough space on the screen. After all, it doesn’t help to have windows hiding behind the window currently in focus because there is no space left to toss two or three windows side by side and still be able to get anything done.

So, a year ago, I purchased, after careful thought and research, a ViewSonic VX1940w external LCD. The reason I settled on this one, and not any other, was a combination of: a) the great price I was getting on it; b) the excellent resolution it offered at that price; c) the fact that it sported both VGA and DVI inputs. I really, really wanted an LCD with a DVI input. If you haven’t clicked through to the LCD’s catalogue page: it sports a maximum resolution of 1680×1050, is 19″ in size, and is wide-screen. It has a matte display, in that it does not have the reflection problem glossy displays suffer from. But it isn’t nearly as crisp in display quality as the glossy.

If you’ve had occasion to use any Mac laptop, you’ll know that they don’t have the de facto VGA and DVI input/output ports. Instead, they can have any one of mini-DVI, mini-VGA, mini-DisplayPort, or micro-DVI/VGA ports, which, by the way, are all Apple proprietary. So, how do you connect almost all of the LCDs out there, which come with standard VGA and DVI ports and connector wires? You buy VGA and/or DVI adapters from Apple. These, I should mention, are anything but cheap.

Naturally, I wanted to buy the DVI adapter for my MacBook. However, I did some research and found bad news: a lot of people on the Mac forums reported their LCDs not working with the DVI adapters Apple provides, because the latter are either DVI-I or DVI-D, and most LCDs don’t support one or the other. That scared me. The DVI adapter was expensive to begin with, and I was not sure whether it would really work with my LCD. So I took the safe lane out and bought the VGA adapter. And it worked out of the box.

But there were problems. These were the sort of problems I documented in a question titled “Why do I feel a strain on my eyes when I look at the 19″ matte display?”, which I asked on the Super User forum. You may want to skim through it for the excruciating details of the problems I faced with the VGA output on the LCD. The bottom line was that I was told to get the DVI adapter instead: because of the jagged fonts on the VGA output and the generally sub-standard picture quality, my eyes were having a hard time adjusting, resulting in strain and headache. Initially, it didn’t make sense to me how that could really be the cause of the problem, and I didn’t buy it. But the fact remained that I couldn’t use the LCD for long without walking off with an incredible burden weighing down on my eyes.

Finally, I bit the bullet today and bought a mini-DVI to DVI adapter. Having plugged it in, I noticed that the “auto image adjust” functionality on the LCD was disabled. It wasn’t when VGA was plugged in. Despite changing a couple of settings I thought might make a difference, I felt the fonts looked even more jagged, the picture quality worse than before. Having spent some more time staring at the screen, opening up terminals and editors and windows, I realized it wasn’t merely a feeling. The fonts and the display did look much worse than before. I freaked out. I didn’t know what to do. I rebooted the MacBook in the hope that when I plugged the LCD in again, I would pleasantly surprise myself. To my utter dismay, nothing changed. I was sorely disappointed and at a loss. A feeling of remorse hit me for having spent a fortune on the DVI adapter only to get a crappier picture than I did on VGA.

With a heavy heart, I started looking around. I skimmed with grief through a couple of similar forum posts about how fonts over DVI input on LCDs looked jagged, and such. Nothing suggested really helped. And then I found this blog post, Snow Leopard Font Smoothing. It talked about the exact same problem I was having. What’s more, it suggested a fix: toggling a global setting on OS X by running the following command in Terminal.app:

defaults -currentHost write -globalDomain AppleFontSmoothing -int 2

I did this, but nothing seemed to change. I wasn’t really sure what else I had to do to get it to work, nor did the blog post hint at anything beyond running that command. After a while, I was convinced there was no hope for me in this. I had nearly given up when I noticed the text on the menu bar. It looked different. It was crisper, better, more beautiful. And then it hit me. I quickly quit my Terminal.app, and when I re-opened it, voila! There they were: the lovely fonts I had fallen in love with on the MacBook screen. I restarted most of the applications one by one. Apparently, the little helpful bit the blog post missed out on was that you have to close all your applications and re-run them for the setting to take effect. I imagine an easier way is to simply log out and log back in.
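
If you want to confirm that the setting stuck before restarting everything, you can read the value back with the same defaults tool; it should print the 2 written above:

defaults -currentHost read -globalDomain AppleFontSmoothing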

I am absolutely ecstatic about all this. Not only can I use my big LCD to manage my work in a better and more productive manner, I don’t have to walk away with unbearable headaches and eye-strain. Well, I still have a headache and eye-strain, to be honest, but that’s an everyday thing. I am so happy.

The engineer I met, and lost.

My brother has a desktop PC, a P-IV, which runs Windows XP. One day, his system began exhibiting random freezes. He took it to a nearby computer shop, where it was diagnosed that the power supply in the chassis had gone south. The replacement supply, costing around 600 PKR including labour, worked well, until, after a few months, the same problem resurfaced.

Suspecting the RAM this time, I ran memtest, the application found on Ubuntu CDs that runs all sorts of tests on the RAM sticks installed in a system, leaving it running for hours. Alas, it detected a couple of erroneous memory positions on one of the two RAM sticks on board, without really indicating which one. Not having any additional RAM sticks lying around, I could not make certain that the problem was indeed a faulty stick, or which one it was. I sat down to search for replacement sticks to buy, shocked at the prices of the rather old type of RAM the motherboard in the computer could live with.

As if my luck had completely run out, before ordering an expensive pair of sticks, I decided on a whim to take the computer to the same shop that had replaced its power supply. And, as I would realise, I was very lucky I did.

If a computer is booted up with no RAM sticks attached, the system, on POST, beeps twice. This combination of beeps is a signal that the system failed to find any physical memory. And that is a good thing. If the system, for any reason, does not beep with no physical memory attached, that is an indication that something is wrong somewhere on the motherboard.

That’s what happened at the computer shop when the engineer there tried to start the system without the RAM sticks. When he told me what I had hoped not to hear, that there was something amiss with the motherboard, I assumed almost instantly, to my dismay, that there was no way around it but to buy a new motherboard. I must have thought out loud, because, to my most pleasant surprise, the guy debunked my assumption, asking me to leave the system at the shop for him to diagnose at leisure. I complied, not having any choice.

Four hours later, he called to tell me that a couple of ICs on the board, around the area where the RAM slots were, had burnt out, and that he could fix them for a meagre 400 PKR. Overly ecstatic at the prospect of not having to buy a new motherboard, I got him to reaffirm that this would indeed, completely and certainly, fix the problem straight up.

When I went to pick up the system, the guy was busy fixing a board in one congested corner of the shop. He had a small desk cluttered with boards, ICs, RAM sticks, and solder dropped from the soldering iron all over. He was perched at his desk, immaculately using a type of soldering equipment I had not seen before: a soldering gun that blew hot air, much like a hair blower. If you’ve ever used a needle-pin soldering iron, you will quickly understand how difficult it is to use a gun that blows hot air to melt the solder. On a motherboard with many, many ICs soldered close to each other, a gun blowing hot air all over is problematic. But he did it as if it were an everyday routine for him. I was impressed. And I was happy.

He was busy tending to customers, and I couldn’t get him to steal some time for a chat. I was, though, able to find out that he was an engineering student at a local engineering university and worked part-time at the shop. He gave me his card, telling me with a smile that I could contact him any time I had a problem. I thanked him profusely and left.

A month ago, I was deeply saddened to find that the shop I had been to was no longer there, replaced by another, different shop. And the card he had given me, I had been unfortunate and clumsy enough to misplace.

Oh, well.

Saved on a technicality

If your god is forever forgiving, provided that you bow only before him and consider him unparalleled, unchallenged, and singular, would you indulge in petty (and non-trivial but non-cardinal) sins and deeds in the belief that at the end of your day you will be forgiven, if you so seek forgiveness for your sins from your god?

Would you, then, intensify your indulgences, as well as your acts of supplication, in the holy month of Ramadan, a sacred period of thirty (or more, or fewer) days in which your god is known by you to be in his most merciful and forgiving demeanour?

Would you not think, on reflection, that a trivial lie, or a dishonest dealing in trade, is but a small mistake that your god will not only not mind, but will also exonerate you from, if you spread your arms and submit to him with all your heart?

Of bubbles and bitter experiences!

When it comes to procrastination (and therefore exaggeration), I stand unparalleled among the many circles I am known to be a part of. It has been a little over a year since I wrote about the procedure for disconnecting PTCL’s DSL service, along with my undeniably firm resolve to sever the connection, and I have only very recently managed to get around to getting my act together.

The exchange building, where I found the DSL office, was anything but a pleasant sight. Dilapidated, worn down through years of constant neglect, it recalled similar sights of government offices that I have had the misfortune of being an audience to. Every wall, every floor, every desk and chair, and every ceiling-mounted fan (each of which appeared dysfunctional) that I could lay my gaze on was layered with dirt and gunge. But what startled me the most was the sight of the two staffers in the small DSL room, sitting on chairs that could fall apart any minute and working on two dust-covered computers on an old, worn-out desk. My heart sank. I was already sweating from the sizzling weather outside. The room felt hellish. A dusty, half-torn portable standing fan was as close as it got to any hope of relief from the heat. Throughout the ten minutes I had to sit in that room, except for the few moments I spent answering what was asked of me, I kept looking about, reflecting thoroughly.

It is easier if one can phase out and shield oneself from unsavoury, unpleasant circumstances by maintaining a bubble around oneself. Keeping that bubble intact is an uphill struggle of its own, though, for there are many places and many moments where its frailness becomes perceptible.

I digress. The steps I described in the earlier post remain the same for applying for discontinuation of service, with the exception that I did not have to surrender the equipment, as it had been well over a year. Folks from their call centre called me twice the next day to enquire into my reasons for cancelling the connection.

Software patents: How bad they are, and how big companies love them

This post on software patents and copyrights and everything else in between is a means of letting off steam, caused by reading news that Apple is taking ideas from commercial software being actively sold and trying to get patents for those ideas, posing as concepts of their own. Yes: ideas and concepts Apple has not conceived themselves, but would like to legally call their own and demand, if and whenever they like, a royalty from anyone building on those ideas — or, in the worst-case scenario, sever competition. Patents are considered evil and bad, and there are good reasons why.

Apple is not the only company doing it. Most big companies do it, or have done it in the past. It has almost become a trend: big companies openly filching ideas from commercial software not their own, and attempting to patent those ideas as their own. For example, here we see Microsoft finally being granted a patent on the “Page Up” and “Page Down” keystrokes. As another example, Microsoft owns a patent on the “Tree-View” mode we have come to love in many file-system applications. These are merely examples, and Microsoft and Apple are not the only big companies indulging in such practices.

In the software industry, it is bad enough that you can get a patent for an idea you conceived that hits off really well; worse still, you have big companies going out and patenting popular concepts in software that isn’t theirs to begin with. Besides handing the big company that did not think of a famous idea, but now owns it, an unfair advantage to play evil with, it severely cripples the ability of other companies in general, and developers in particular, to build upon that idea in order to make and sell better, bigger products, especially when the idea is as basic and simple in nature as a window layout, which most or all products need to build upon.

One has to understand what a patent (a software patent in particular, in this context) is in order to fully grasp the extent to which they are a threat to the software industry. Let’s go over it with a simplistic analogy. I think of a brilliant idea, say, tabbed-window browsing. No one at this point has thought of it yet. I go out and roll out a browser which features an implementation of my tabbed-window browsing idea. As set out in the US Copyright Law (and in copyright laws mostly everywhere), any implementation or creation, the moment it is materialised into any tangible form, automatically becomes the property of the individual implementing or creating it, and as such, that individual automatically gets the copyright to it. Now, copyright and patent are two different things. At this point, I have the copyright to my implementation of my idea of tabbed-window browsing: the browser, or, if we concentrate only on the implementation of the idea, the code that implements the tabbed-window browsing functionality, is under my copyright. My idea, however, is not.

Ideas cannot be copyrighted. They can be patented, though. And that is where patents come in. You cannot copyright an idea because, according to the US Copyright Law, for anything to be copyrightable, it has to be a work, in a tangible form, of an idea. An idea is not something tangible. That is all fine, but how are patents a threat to the software industry? Let’s imagine, further, that you, a big company with a not-so-great browser product, go out of your way and patent the tabbed-window browsing idea that I thought of. You get the patent, and now you legally own the idea. And then you plan to play dirty. Since my browser with tabbed-window browsing support is gaining popularity at a breath-taking speed, which is more than hurting your browser’s market penetration, you charge me with patent infringement. Yes: I, with my implementation of the idea of tabbed-window browsing, which you now legally own, am infringing on something you hold a patent for. Forget the moral implications of your getting a patent for what you did not think of; I am committing a crime. And you can drag me to court for it. Easily. At this point, there are two things you can force upon me: either make all my customers pay me a royalty fee, which in turn I pay back to you, and continue to let my browser remain in business, or at least in existence, thereby continually paying you a royalty for as long as the browser lives; or force me to pull my product and end its life. What is perhaps worse is that there is barely anything I can do about it.

Now do you see how bad patents are? Fortunately, unlike copyright, which applies automatically the moment you create a tangible form of your work, the process of acquiring a patent is a long and tedious one, requiring the filing of a patent application at the patent office, waiting for the office to approve and grant the patent, and everything in between. However, the brutal fact is that proving you are the one who actually thought of an idea is, most unfortunately, not a requirement for getting a patent on it. That is how big companies manage to run away with patents for ideas that belong to others. Couple that with the threat patents pose, described earlier, and you may see how deadly patents can become when brandished by evil companies to leverage ill-gotten advantages.

As an individual, and not least a software developer, tester, etc., there is little you can do about this, but it helps to know. Let’s turn it up a notch for no software patents.

Independence Day

On the eve of Pakistan’s 61st Independence Day, it rained thoroughly throughout the city — perhaps throughout the country. I walked out late in the morning, over the wet asphalt of a road that had been rebuilt a few months ago, to the spot where one of our cars was parked. Dad is in the habit of leaving the car windows slightly open for ventilation, and whenever it rains, we always tend to forget about that, subsequently ending up having to ride in a car with wet seats. It was more than drizzling as I slowly paced my way up to the car. The look down the lane during a downpour is as breathtakingly enchanting as anyone can imagine. There is not a single house along either side of the lane that is not host to a lush and lively collection of greenery. In rain, it almost looks like a still picture artistically capturing a beautiful landscape. The pitch-dark velvet worn by the asphalt drenched in rain water, and the aromas of wet sand, dripping flowers, and drenched trees permeating the lane from all corners, paint a reflection that one may be willing to believe can only be that of heaven. Yesterday, as I stood outside my home, I was a lucky guest to that heavenly peek.

The rain picked up, and with it, so did my pace towards the car. As the raindrops trickled down my body, I stood next to the car and looked up to face the sky straight on. I felt something I had never felt. Ever. The fact that I was running a fever while getting drenched certainly reinforced the feeling. I could almost sense that the rain falling down relentlessly, the clouds coughing up in shrill sounds, everything around me, they were all mourning — grieving a big loss. Amidst it all, I could see heaven. But every element of it seemed without hope; every element of it looked despairing, grief-stricken, as if it was not in the least enjoying the rain or the chilling weather, but suffering from great sorrow. I shook my head, and with a shock, as a deep sense of despair overwhelmed me, I realised what it was. I cranked the windows shut, made sure the doors were locked properly, and, giving a passing look through sorrowful eyes down that lane, quickly trod over the rain water back into my house.

Today is Pakistan’s 61st Independence Day. As before, many have chosen to sleep the day off. I wish I could. I look at the few green flags hoisted high on the roofs of the houses, thrown about by slight whiffs of wind. I look, and I feel depressed. Thoroughly. Local TV channels all over the place push hard to give off a vibrant feeling of what they would like to proudly call Independence Day celebrations. To me, it is more melancholic than looking at the almost idle flags hanging outside like dismal prisoners on death row waiting to be hanged. I wish I could sleep the day off. I mourned Independence Day yesterday, alone, out under a crying sky. I was lucky to have been given that moment.

On Pakistan’s 59th Independence Day, I portrayed a less gloomy picture. I need not compare nor contrast how far forward or backwards we have come since then. At the very least, as a Pakistani — proud, happy, sad, or grieved — it is your responsibility to know. If you don’t, today is the day to sit down and reflect in all earnestness.

Happy Independence Day!

Epilogue: How to terminate your PTCL Broadband DSL contract

After months of bearing mental torture, limited or no connectivity, and bandwidth and service charges without a usable service to show for them, I have finally decided to throw in the towel and terminate the contract I have for the PTCL Broadband DSL service. Now, since the monthly bill for the service comes bundled with the telephone bill, there is no easy way out to cancel the service. From what I gathered from the support folks, here, roughly, is the tedious procedure:

  • Grab a photocopy of the last paid telephone bill, along with a copy of your NIC.
  • If you still have the receipt from when you got the service, get a copy of that too; if not, skip it, as it is optional.
  • If you have been a subscriber for less than a year, you’ll have to return all the equipment, so pack it up in the box and get it ready.
  • Jot down an application addressed to (I believe) the PTCL DSL officer.
  • Head down to the nearest PTCL exchange office.
  • Hunt down the PTCL DSL officer in there somewhere.
  • Turn over the equipment (if any), the application, and photocopies of the aforementioned documents; fill out a form, sign it, and return it to the officer.

I think that is about it. I will go through all the hoops next week; if by then I find any incoherence in the steps I have scribbled down, I will definitely edit them and post an update.

I wrote a post a while back about the problems I have been facing with PTCL DSL. Judging from the responses that post attracted alone, and from first-hand conversations I have had with people, I am apparently not the only one who has been annoyed out of their mind by issues with PTCL DSL. In my case, I am willing to admit that it is mainly the low Signal-to-Noise Ratio (SNR) that is severely clamping down the connectivity — and, more often than not, completely chopping off all connectivity. However, I would like to highlight that I had been using Cyber.net DSL for over a year on the exact same line, and I never had connectivity-related problems with it; even on days when it rained badly and the phone line was nearly defunct, the DSL would continue to function nearly flawlessly.

If you look at the tariffs offered by similar DSL providers, such as Max.com and Cyber.net, then look back at the packages PTCL provides at relatively cheaper rates, and consider the problems people have been facing with PTCL DSL in particular, you’d be bent on believing that there is definitely a catch involved.

For what it’s worth, I had my stretch with PTCL DSL, and as much as I would’ve wanted it otherwise, it just didn’t work out between us.

Currently, I am content with WiMax from Wateen. Almost as a rule, wired connectivity is always superior to wireless, but, I suppose, those rules don’t apply here.

Shooting a bird flying high in the blue sky.

Do you believe in destiny? Do you subscribe to the belief that what happens, happens for good; that what bad happens, happens to avert worse from happening? Do you believe that, from the point you settle on a decision till the point you materialise it, you are in total control of everything that is happening to you and in your life?

When life inflicts pain, yet leaves behind little hints of hope that there’s a reward waiting at the end of the painful ride, do you stop there and get off? Do you wish for a shortcut? Or do you openly ask life to bring on as much pain as it possibly can?

What do you do when you feel weak? What is your source of strength when your faith starts running low? What do you look to when you get hopeless?

In being pessimistic, in being optimistic, in being practical, where do you draw the line?

What do you do when you have questions but no one to ask? Where do you look for contentment when all you want to do is grab life and beat it with a club, despite knowing it won’t make you any more content?

Do you beg for help and support, yet quickly shun them when you sense them coming, only because you want to suffer every bit of the pain alone?

At what point do you snap? At what point do you cease believing? At what point do you stop looking for answers?

What do you do when you don’t know what to do?

Someone I know very well once said, painfully and rightly so, “Waiting is painful. Forgetting is painful. But not knowing which to do is the worst suffering of all.”

BSA’s crackdown on software piracy in PK: The 35-day grace period

The title of a brief column on the second page of Dawn’s Metropolitan section today caught my attention by surprise. It read “Deadline for installing illegal software”. That is certainly not something you get to read every day in Dawn.

According to the column, the Business Software Alliance (which I’ve long known as a group trying hopelessly yet persistently to crack down on software piracy in the subcontinent) granted a 35-day grace period to companies in PK using pirated or illegal software, in which to acquire licensed versions of the software being used. Not only that, the BSA also stated that all past acts of copyright infringement would be forgiven if companies opted to purchase licensed software within the grace period and got rid of the illegal/pirated software. Today’s column highlights the BSA reminding companies that the grace period ends 30 May and that it will resume the crackdown on piracy from the first of June.

I’ve kept a tab, on and off, on the BSA ever since it came into the picture, but I have to admit I was totally oblivious to this grace period. Despite that, I think the 35-day grace period, with the incentive that all previous copyright infringement cases will be overlooked for the companies in question if they get rid of pirated software and quickly buy and start using licensed software, is a wonderful way to get companies that are making dough using illegal software in their business to buy and use licensed software. As with any initiative in PK, it remains to be seen how many companies actually pay heed and move over to licensed software, and how many shrug it away.

The ground realities leave no doubt as to what the reaction of major companies in PK will be to the BSA’s 35-day grace period initiative. I would still love to see how this turns out.