185.31.17.0/24 subnet of Fastly’s CDN null-routed in Pakistan?

I rely heavily on GitHub and Foursquare every day: the former for work and pleasure, the latter for keeping track of where I go through the course of a day. Since yesterday, though, I have been noticing that pages on GitHub take close to an eternity to open, if they don’t fail outright. Even when a page loads, all of the static content is missing and many other things don’t work. With Foursquare, I haven’t been able to get a list of places to check in to. Yesterday, I wrote these off as glitches on Foursquare’s and GitHub’s networks.
 
It was only today that I realized what was going on. GitHub and Foursquare both rely on Fastly’s CDN services. And, for some reason, Fastly’s CDN has not been working from within Pakistan.
 
The first thing I did was look up Fastly’s website, only to find that it didn’t open for me either. Whoa! GitHub’s not working, Foursquare’s not loading, and now I can’t even get to Fastly.
 
I ran a traceroute to Fastly, and to my utter surprise, the trace ended with a !X (communication administratively prohibited) response from one of the level3.net routers.
 
$ traceroute fastly.com
traceroute: Warning: fastly.com has multiple addresses; using 216.146.46.10
traceroute to fastly.com (216.146.46.10), 64 hops max, 52 byte packets
[...]
 6 xe-8-1-3.edge4.frankfurt1.level3.net (212.162.25.89) 157.577 ms 158.102 ms 166.088 ms
 7 vlan80.csw3.frankfurt1.level3.net (4.69.154.190) 236.032 ms
 vlan60.csw1.frankfurt1.level3.net (4.69.154.62) 236.247 ms 236.731 ms
 8 ae-72-72.ebr2.frankfurt1.level3.net (4.69.140.21) 236.029 ms 236.606 ms
 ae-62-62.ebr2.frankfurt1.level3.net (4.69.140.17) 236.804 ms
 9 ae-22-22.ebr2.london1.level3.net (4.69.148.189) 236.159 ms
 ae-24-24.ebr2.london1.level3.net (4.69.148.197) 236.017 ms
 ae-23-23.ebr2.london1.level3.net (4.69.148.193) 236.115 ms
10 ae-42-42.ebr1.newyork1.level3.net (4.69.137.70) 235.838 ms
 ae-41-41.ebr1.newyork1.level3.net (4.69.137.66) 236.237 ms
 ae-43-43.ebr1.newyork1.level3.net (4.69.137.74) 235.998 ms
11 ae-91-91.csw4.newyork1.level3.net (4.69.134.78) 235.980 ms
 ae-81-81.csw3.newyork1.level3.net (4.69.134.74) 236.211 ms 235.548 ms
12 ae-23-70.car3.newyork1.level3.net (4.69.155.69) 236.151 ms 235.730 ms
 ae-43-90.car3.newyork1.level3.net (4.69.155.197) 235.768 ms
13 dynamic-net.car3.newyork1.level3.net (4.53.90.150) 236.116 ms 236.453 ms 236.565 ms
14 dynamic-net.car3.newyork1.level3.net (4.53.90.150) 237.399 ms !X 236.225 ms !X 235.870 ms !X

Now that, I thought, was most odd. Why was Level3 prohibiting the trace?

I went looking for a support contact at Fastly to try and find anything that could explain what was going on. I found their IRC chat room on Freenode (I spend a lot of time on Freenode) and wasted no time dropping into it. The kind folks there told me that they’d had reports of one of their IP ranges being null-routed in Pakistan: the 185.31.17.0/24 range. I did some network prodding of my own and confirmed that that was indeed the subnet I couldn’t get to from within Pakistan. As the pings below show, the neighbouring /24s respond fine, while 185.31.17.0/24 does not.

$ ping -c 1 185.31.18.133
PING 185.31.18.133 (185.31.18.133): 56 data bytes
64 bytes from 185.31.18.133: icmp_seq=0 ttl=55 time=145.194 ms
--- 185.31.18.133 ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 145.194/145.194/145.194/0.000 ms

$ ping -c 1 185.31.16.133
PING 185.31.16.133 (185.31.16.133): 56 data bytes
64 bytes from 185.31.16.133: icmp_seq=0 ttl=51 time=188.872 ms
--- 185.31.16.133 ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 188.872/188.872/188.872/0.000 ms

$ ping -c 1 185.31.17.133
PING 185.31.17.133 (185.31.17.133): 56 data bytes
--- 185.31.17.133 ping statistics ---
1 packets transmitted, 0 packets received, 100.0% packet loss

They also told me they’d had reports from users on both PTCL and TWA for whom the range in question was null-routed. They said they didn’t know why it had been null-routed, but would appreciate any information the locals could provide.

This is ludicrous. After Wi-Tribe’s filtering of UDP DNS packets to Google DNS and OpenDNS servers (which they still do), this is the second absolutely preposterous bit of network meddling that has pissed me off.
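Incidentally, if you want to verify that sort of UDP DNS filtering for yourself, a probe along these lines should show it (a sketch; 8.8.8.8 is Google’s public resolver, and the exact output will vary):

$ dig @8.8.8.8 github.com +time=3 +tries=1   # plain UDP query; times out if UDP DNS is filtered
$ dig @8.8.8.8 github.com +tcp               # the same query over TCP, for comparison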

There is more to life than …

I came across this short but extremely powerful post on the gapingvoid blog. It not only moved me but forced me to question myself, to ask what it is that I have done with my life. I thought it needed to be reproduced in full here, with proper attribution.

To paraphrase Seneca, the tragedy isn’t that life is short, the tragedy is that we waste so much of it.

The other types of tragedy, the more violent kind, never worry me too much, thankfully. I never lost much sleep, worrying about wars or serial killers or whatever.

But the thought of getting to the end of my life and realizing that I had wasted most of it, that froze my blood.

As it should…

To this day, I remember that day in the summer of 2008 when I decided to pack and seal my laptop inside my cupboard and board a one-way plane to Quetta, to spend a month with relatives away from everything I did at home, far, far away from any form of technology. All I had when I reached the airport were a bag full of clothes, a book, and my cellphone (which I carried so that I could keep in touch with my parents).

One month later, when I returned, an incredibly painful realization dawned on me. I felt I had wasted an entire month of my life doing absolutely nothing. I thought of all the ways I could’ve spent that month that would’ve meant something to me, of all the productive things I could’ve done that would’ve helped me and my never-ending quest for knowledge and for doing things that matter. I couldn’t come to terms with the fact that I had wasted over thirty days doing nothing more than resting, reading a book that was purely fiction, and socializing with a handful of relatives.

People take breaks and vacations to cut themselves away from their hectic lives, to refresh and revitalize themselves, to save themselves from the risk of burning out. When they come back after such breaks, they feel energized and ready to once again take on the mountains that lie before them.

I took a vacation, I took a break. I rested, I cut myself away from technology, from the usual things that made up my hectic life. But when I came back, I didn’t feel energized. I didn’t feel revitalized. I felt regret. I felt slow and lethargic. I felt angry at myself for having wasted so much time doing nothing.

To this day, I live with that regret. I understand that regrets are harmful and best thrown away, but that’s one of those things that are easier said than done.

Guide: Deploying web.py on IIS7 using PyISAPIe

I spent the last week toiling away, trying to find a way to deploy a web.py based API on IIS7 using PyISAPIe. As frustrations began to mount, I had nearly decided to give up. Being a die-hard Linux and Mac guy, I despise having to work on Windows. Here I was, not only forced to work on Windows, but to find a solution to a problem that left no stone unturned in its effort to drive me crazy. As if someone had decided all this misery wasn’t quite enough, I had to work over a remote desktop session in order to research, tweak, bang my head, and get things to work. Eventually, I cut through the massive frustration and despair and managed to find a satisfactory solution. I almost danced in excitement and relief, letting out all sorts of expletives directed at Windows in general and IIS in particular.

To get back to the important question of deploying a web.py script on IIS7 using PyISAPIe: this guide lists the various steps I took, including snippets of the relevant code I changed, to tame the beast. I can only hope that what follows will help some poor, miserable soul looking for help as I did (and found none).

I worked with PyISAPIe because I had already successfully deployed multiple Django websites on IIS7 with it. The script in question was going to be part of another Django website (though acting independently), so it only made sense to use PyISAPIe for it as well.

First and foremost, I had to install the web.py module on the system. Having had trouble before with IIS and web.py installed through easy_install, I decided to play it safe and installed it from source. Getting web.py to work with PyISAPIe required a small hack (I may make it sound as though it all came to me in a dream, but in reality it took me days, and much anguish and pain, to figure out). In the file Lib\site-packages\web\wsgi.py lies the following function:

def _is_dev_mode():
    # quick hack to check if the program is running in dev mode.
    if os.environ.has_key('SERVER_SOFTWARE') \
        or os.environ.has_key('PHP_FCGI_CHILDREN') \
        or 'fcgi' in sys.argv or 'fastcgi' in sys.argv \
        or 'mod_wsgi' in sys.argv:
            return False
    return True

In its pristine state, when web.py is imported from a source file through PyISAPIe, an exception is thrown. While I don’t have the exact message, the exception complains about sys not having an attribute argv, which makes some sense in hindsight: presumably there is no command line, and hence no sys.argv, when running under ISAPI. Since _is_dev_mode() only checks whether web.py is running in development mode, and I wanted everything to run in production mode, I didn’t care about its logic at all. I edited the function so that it returns False right away, bypassing the rest of its body. It looked like this (the only change is the early return at the top):

def _is_dev_mode():
    return False
    # quick hack to check if the program is running in dev mode.
    if os.environ.has_key('SERVER_SOFTWARE') \
        or os.environ.has_key('PHP_FCGI_CHILDREN') \
        or 'fcgi' in sys.argv or 'fastcgi' in sys.argv \
        or 'mod_wsgi' in sys.argv:
            return False
    return True

This innocuous little addition did away with the exception.

Next up, I used the default Hello World-esque example from the web.py site to test the deployment (of course, I then went on to use my original API script, which was far too complex to trim down into an example). I called it code.py and placed it inside the folder C:\websites\myproject. It looked like this:

import web

urls = (
    '/.*', 'hello',
)

class hello:
    def GET(self):
        return "Hello, world."

application = web.application(urls, globals()).wsgifunc()

It was pretty simple. Pay particular attention to the call to web.application: I call wsgifunc() on it to get back a WSGI-compatible function that boots the application. I prefer WSGI.

I set up a website under IIS using the IIS Management Console. Since I was working on a 64-bit server edition of Windows and had chosen to use the 32-bit version of Python and of all the modules, I made sure to enable 32-bit support for the application pool used by the website. This was important.
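If you prefer the command line, I believe the same switch can be flipped with appcmd (a sketch; the pool name MyProjectPool is made up):

> %windir%\system32\inetsrv\appcmd.exe set apppool "MyProjectPool" /enable32BitAppOnWin64:true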

I decided to keep the PyISAPIe folder inside the folder where code.py rested. This PyISAPIe folder contains, most importantly, the PyISAPIe.dll file and the Http folder. Inside the Http folder, I placed the most important file of all: Isapi.py. That file can be thought of as the starting point for each request, the glue between a request and the proper handler code. I started from the Examples\WSGI\Isapi.py that ships with PyISAPIe and tweaked it to look like this:

from Http.WSGI import RunWSGI
from Http import Env
#from md5 import md5
from hashlib import md5
import imp
import os
import sys
sys.path.append(r"C:\websites\myproject")
from code import application
ScriptHandlers = {
	"/api/": application,
}
def RunScript(Path):
  global ScriptHandlers
  try:
    # attempt to call an already-loaded request function.
    return ScriptHandlers[Path]()
  except KeyError:
    # uses the script path's md5 hash to ensure a unique
    # name - not the best way to do it, but it keeps
    # undesired characters out of the name that will
    # mess up the loading.
    Name = '__'+md5(Path).hexdigest().upper()
    ScriptHandlers[Path] = \
      imp.load_source(Name, Env.SCRIPT_TRANSLATED).Request
    return ScriptHandlers[Path]()
# URL prefixes to map to the roots of each application.
Apps = {
  "/api/" : lambda P: RunWSGI(application),
}
# The main request handler.
def Request():
  # Might be better to do some caching here?
  Name = Env.SCRIPT_NAME
  # Apps might be better off as a tuple-of-tuples,
  # but for the sake of representation I leave it
  # as a dict.
  for App, Handler in Apps.items():
    if Name.startswith(App):
      return Handler(Name)
  # Cause 500 error: there should be a 404 handler, eh?
  raise Exception, "Handler not found."

The important bits to note in the above code are the following:

  • I import application from my code module. I append the directory in which code.py lives to sys.path so that the import statement does not complain. (I have to admit that the idea of importing application and feeding it into RunWSGI came to me while I was in the loo.)
  • I define a script handler which matches the URL prefix I want to associate with my web.py script. (In hindsight, this isn’t necessary, as RunScript() is not used in this example.)
  • In the Apps dictionary, I again route the URL prefix to the lambda function which actually calls the `RunWSGI` function and feeds it application.
  • I also import the md5 function from the hashlib module instead of from the md5 module as the file originally did. This is because Python complained that the md5 module is deprecated and suggested using hashlib instead.

And that’s pretty much it. It worked. I couldn’t believe what I saw in the browser in front of me. I danced around my room (while hurling all kinds of expletives).

There’s a caveat though. If you have specific URLs in your web.py script, as I did in my API script, you will have to modify each of those URLs and add the /api/ prefix to them (or whatever URL prefix you set in Isapi.py). Without that, web.py will not match any URLs in the file.
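For the hello example above, that would mean changing the urls tuple to something like this (a sketch, matching the /api/ prefix set in Isapi.py):

urls = (
    '/api/.*', 'hello',
)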

What a nightmare! I hope this guide serves to help others.

Thank you for reading. Good bye!

PS: If you want to avoid using PyISAPIe, there is a simpler way of deploying web.py on IIS. It is documented crudely over here.

Browsing on the BlackBerry simulator

I am not a big fan of BlackBerry smart-phones. I realize that there are a lot of people who seemingly can’t exist without access to their email virtually all the time, and for that lot, BlackBerry, with its prominent push email feature, is perhaps a better fit than any other smart-phone platform out there. As for me and my smart-phone usage, I would not go so far as to say that I can’t live without my phone. I can. By every measure, I consider myself a hardcore geek, perhaps more hardcore than most, but I am by no means a gadget freak. While it would be unfair to say that I absolutely abhor typing on small (semi-)keyboards, I don’t quite enjoy the experience either. When it comes down to typing, I much prefer a full-fledged keyboard. That is why, to me, a compact laptop is many times more important than a fully equipped smart-phone. (For the curious reader, I own a Nokia E72.)

For a recent mobile website project that I worked on, I received a complaint from the client that the layout of certain pages on the site didn’t look quite as expected on BlackBerry devices. Naturally, I had neither a BlackBerry handset nor an easy equivalent with which to test the issue myself, so I did what anyone stuck in the same corner would do: I went over to the BlackBerry developer portal to look for BlackBerry simulators.

Unlike the Epoc simulator for Symbian/Nokia phones and the iPhone simulator, the BlackBerry simulators were spread out such that for each BlackBerry smart-phone model in existence, there was a separate simulator available. And each one of the downloads was anywhere from 50 to 150 MB in size.

I chose the simulator for one of the latest BlackBerry handsets and downloaded it. Like the Epoc simulator, the BlackBerry simulators are Windows-specific, in that they are available as Windows executable binaries. I didn’t have Windows anywhere in my study, so I had to set up a Windows guest inside VMware Fusion to install the simulator. To cut a long, painful story short, I was able to install the simulator after tirelessly downloading a big Java SDK update, without which the installation wouldn’t continue. And then I powered up the simulator. I was instantly reminded of the never-ending pain I had suffered at the hands of the Epoc simulator in my previous life as a Symbian developer. The BlackBerry simulator took ages to start up. I almost cursed out loud, because that fact alone opened up old, deep gashes that I thought I had patched up for good. I was mistaken. Never in my dreams had I thought of having to deal with such monstrosity ever again. And, to my utter, absolute dismay, here I was.

Eventually, after what seemed like ages since booting up the simulator, I was able to navigate my way to the BlackBerry browser. I let out a deep sigh and thought that I could now finally concentrate on the problem I had set out to tackle. But, no! I couldn’t browse on the BlackBerry browser at all. No amount of fiddling with the 3G and WiFi settings all over the BlackBerry OS got browsing working. From what I could tell, both the 3G and WiFi networks were alive, but no traffic was flowing through. I almost gave up.

After groping around the Internet with a wince on my face, I finally found out why. Apparently, by default, the BlackBerry simulators are unable to simulate network connections. In order to fix this, you have to download and install an additional, supplementary simulator called the BlackBerry MDS Simulator. Once the MDS simulator is up and running, your actual BlackBerry simulator will be able to simulate network connections, browse, and do all sorts of network-related functions. Who knew!

As an aside, there’s also the BlackBerry Email Simulator that simulates messaging functionality.

Front Row glitch with Apple external keyboard

MacOS has this application called Front Row. When activated, it takes over the entire screen, displaying what is known as a “10-foot user interface” to allow the user to kick back, relax, and watch and listen to videos, podcasts, music, or media content of any kind. If you’ve got a big screen, such as an Apple TV or an Apple Cinema Display (which, if I may add, are crazy expensive), then with the aid of Front Row you can enjoy a great media-centre-esque experience.

On MacOS, Front Row is tied to a keyboard shortcut: Command + Escape. Once it is activated, you can use the arrow keys to navigate your way through the 10-foot interface, selecting whatever media content you want to look at. A really convenient feature of Front Row is its integration with the Apple remote. Essentially, you can sit back and navigate the media centre with nothing but the wireless remote.

I’ve owned a MacBook for over two years now. Having used the keyboard on the MacBook for nearly all this time, I now find that I can barely type on it without causing myself a great deal of agitation. I’m a touch typist, and when I cannot type both fast and accurately, and when I know for a fact that I can’t, not because I lack the capacity but because the keyboard is the bottleneck standing in the way, I get frustrated very easily. This, unfortunately, is now the case with the MacBook keyboard. To work around it temporarily, I recently dived, without a clear head, into a shopping spree and emptied my wallet buying the Apple extended external keyboard. While it is not really conducive to touch typing (something I find appropriate to elaborate on in a different article altogether), I am able to get by and get my work done without coming close to a nervous breakdown.

Now, here, I should point out that I don’t have substantial evidence to prove this (and to that end, I am groping around for it), but I suspect that Front Row and the external Apple keyboard don’t quite play nicely together. I am not a very media-centric person, in that I am not as fond of watching movies as often as many of my friends are, for example, so I have little to no use for Front Row. However, since I do use all sorts of keyboard shortcuts to perform different functions across many esoteric applications (including Vim, the use of which puts a lot of emphasis on the Escape key), I somehow end up pressing the wrong key combination and activating Front Row unwittingly. But that’s fine, you may say, because all I then have to do is close Front Row. Yes, well, sort of.

My problem is that, with the external keyboard attached, if I accidentally start up Front Row and let it take over the screen, I am unable to exit it. The application itself has no UI elements that can be clicked to command it to quit, and the keyboard shortcut that happens to be the only means (known to me) of exiting it stops working in the presence of an external Apple keyboard. Because of all that, I end up with a big problem on my hands: I get stuck with an application that essentially takes over my computer, and I can’t do anything about it. I’ve tried waiting for the screen saver to kick in, hoping that when I make the screen saver go away, I would get back control of my system; I’ve also tried letting the MacBook sleep and wake up subsequently, but all in vain. Front Row simply doesn’t go away, until I am left with no option but to forcefully reboot the MacBook, losing all my data inside all my running applications, which, I should mention, keep functioning normally behind the nauseating Front Row screen.

I’ve had this happen inadvertently far too many times to continue to ignore it. So I eventually did the first thing I could think of: find a way to disable the keyboard shortcut. It turned out to be pretty easy to do. Most of the common keyboard shortcuts linking to different applications and actions are, as it turns out, defined under the Keyboard Shortcuts section of the Keyboard pane inside System Preferences. The shortcut for the bloody Front Row is in there somewhere.

I am pretty sure there’s something amiss with Front Row and its keyboard shortcut when an external keyboard is attached, but as I said before, I have nothing to substantiate that claim. Right now, I am happy and relieved to be able to disable the shortcut.

Ranting about Mercurial (hg) (part 1)

The day I got a grip on git, I fell in love with it. Yes, I am talking about the famous distributed version control application: git. I was a happy Subversion (svn) user before discovering git. The day-to-day workflow with git, and the way git worked, made me hate Subversion. I was totally head over heels in love with git.

I have come to a point where the first thing I do when I am about to start a new project is start a git repository for it. Why? Because it is so easy and convenient to do. I really like the git workflow and the little tools that come with it. I know that people who complain about git often lament its lack of good, proper GUI front-ends. While the state of GUI front-ends for git has improved drastically over the years, I consider myself a happy command-line user of git. All my interactions with git happen on the command-line. And, I will admit, it does not slow me down, hinder my progress, or limit me in any way in what I do. I have managed fairly large projects with git from the command-line, and let me tell you, if you are not afraid of working on the command-line, it is extremely manageable. However, I understand that a lot of people run away from the command-line as if it were the plague. For those people, there are some really good git GUI front-ends available.

I have to admit that for someone coming from no version control background, or from a centralized version control background (such as Subversion or CVS), the learning curve for git can turn out to be a little steep. I know it was for me, especially when it came to getting to grips with concepts like pushing, pulling, rebasing, and merging. I don’t think I am off-base in thinking that a good number of people who use version control software for fairly basic tasks are afraid of merge conflicts when collaborating on code. I know I was always afraid of them, until I finally faced a merge conflict for the first time. That was the last time I felt afraid, because I was surprised to find out how easy it was to deal with and resolve merge conflicts. There’s no rocket science to it.

I have been impressed enough with git that I wince when I am forced to use any other version control system. In fact, I refuse to use Subversion, as an example, and try my best to convince people to move to git. Why? Because git is just that good. I should probably point the reader to this lovely article that puts down several reasons why git is better than a number of other popular version control applications out there.

Recently, at work, I have had to deal with the beast they call Mercurial (hg). Yes, it is another famous distributed version control system. I was furious at my peers’ decision to select Mercurial for work-related use, and tried my best to convince them to use git instead, but they didn’t give in to my attempts to persuade them. I had no choice but to unwillingly embrace the beast and work with it.

Instantly, I had big gripes with the way Mercurial worked, as well as with its general workflow. Where I fell in love with how git tracks content and not files, I hated the fact that Mercurial tracks only files. What I mean is that if you add a file in a Mercurial repository to be tracked and commit it, then any subsequent change to that file is automatically made part of the next commit. Mercurial doesn’t let you manually specify which of the changed files should be part of the pending commit, something git does through its staging area: a holding area where the changes that have been selected for the next commit are loaded, ready to be committed. With git, you have to manually add each file to the staging area, and I loved doing it that way. With Mercurial, it is done automatically.

This brings me to the second big gripe I had with Mercurial: incremental commits. The git workflow places an indirect emphasis on incremental commits. What are incremental commits? In the version control world, it is considered best practice for a commit to be as small as possible while not breaking any existing functionality. Say you have changed ten files to add five different features that are independent of one another, but you would like to commit one feature at a time. That is where incremental commits come into play: you specify only the files you want committed and commit them, instead of committing all the changed files in one bunch with a general-purpose commit message. Incremental commits are a big win for me, and I adore the practice. Because git makes you specify which changed files go into the staging area, incremental commits come easily and naturally. With Mercurial, which automatically sweeps all changes to tracked files into the pending commit, they do not.
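As a quick sketch of the git side of this (the file names are made up), only what you explicitly add ends up in the next commit:

$ git add foo.py                       # stage only foo.py; a modified bar.py stays out
$ git commit -m "Add the foo feature"  # commits just the staged change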

So I asked the obvious question: how do we do incremental commits with Mercurial? After sifting through the documentation, I found out that it is indeed possible, but in a very ugly way. The Mercurial commit command takes an extra parameter, -I, that can be used to explicitly specify which files to make part of the commit. But that isn’t all. If, say, you want to commit five of the ten changed files, you have to tag the -I switch onto each of those five files; otherwise, Mercurial will fail to include them in the commit. To this day, this little quirk bites me every now and then.
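Committing, say, two of the changed files then looks something like this (hypothetical file names again):

$ hg commit -I foo.py -I bar.py -m "Add the foo feature"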

I will admit that the one thing I really liked about Mercurial was the built-in web server it comes with, which provides a web-based interface for viewing your Mercurial repositories. This was something missing from, or quite difficult to do with, git. And this was also one of the biggest reasons my peers at work decided to go with Mercurial. They all wanted a convenient way to access the repositories over the web.
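To Mercurial’s credit, spinning up that web interface is a one-liner (the port number is arbitrary):

$ hg serve --port 8000
# then browse to http://localhost:8000/ to view the repository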

To be continued.

Carbide needs a rebuild option

By and large, Carbide is the best IDE for Symbian development available today. Standing on the shoulders of Eclipse, Nokia has put considerable time and effort into customizing Carbide to a point where it makes Symbian development convenient for developers. Having said that, I should also point out that Nokia and Carbide both have a long way to go to provide as great an experience for Symbian development as, for instance, Apple and Xcode provide for iPhone development.

Under the hood, the Carbide camp has adopted what has become second nature to Unix and Linux developers: the “Keep It Simple, Stupid” (KISS) philosophy. Carbide heavily uses a tool-chain to perform a lot of different yet important tasks. That tool-chain includes interpreters such as Perl, compilers such as gcc, build configuration utilities such as make, powerful debuggers such as gdb, and so on. When a project is built, or even cleaned, Carbide actively relies on this tool-chain to perform a set of very crucial tasks. And in the traditional Unix/Linux style, those disparate tools are linked together with pipes: one tool performs the task it is designed to do best and spews its output to another tool, which performs the next task. This is a great philosophy to follow, I believe, because some of the best open source software gets put to use. In other words, Carbide, among many other things, does two things in particular: it provides a polished, snazzy front-end that lives on top of powerful, command-line open source tools; and it links together a set of tools to make different things possible. If you have used Linux extensively or programmed on it, you will feel at home, as this methodology is not uncommon there.

Admittedly, I have a plethora of pet peeves with Carbide in general and Symbian development in particular. Here, however, I have decided to gripe about one annoyance in particular: Carbide doesn’t have a “rebuild” option. Anyone reading this may think I am whining about a very trivial thing, but it isn’t trivial at all. Consider the fact that Carbide relies largely on piping together different command-line utilities to perform a disparate set of tasks, presenting the results on the front-end. While Carbide does this mostly right, it falls short in many places when it comes to clearly showing what went on in the background when, say, a project build fails due to errors. Building a Symbian project can be divided into several parts, which Carbide abstracts under the “build” operation. These parts include, but are not limited to, compilation of resource files for UI elements, compilation of source files and linking of the resulting object files, creation of binaries, packaging of binaries and resource files, creation of SIS files, signing of SIS files, and so on. Carbide tries really hard to keep the user aware of anything that goes wrong at any of those steps during a build. However, Carbide is not perfect, and as such leaves itself open to subtle problems that can become a cause of increased annoyance for the developer.

Now, Carbide does have a “clean project” option, which I use before every single build. But I don’t know how many other developers do so; I did not use to do it routinely in the past, for example. If you are a Symbian developer, you must have been bitten at least once by the problem that arises from not cleaning your project before every build. Because the build process is divided into a number of steps, with some steps producing concrete artifacts that are consumed by the steps that follow, there is a subtle, ugly problem lurking in there. Consider the step where resource files for the UI elements are built, and the step where the package for the application is built. The latter has to incorporate all pertinent compiled resource files, as well as binaries, into one final package. And here lies the subtle problem: the tool that creates the package does not look at the previous steps to ensure that everything went OK. It can’t. All it does is look in some directory for the compiled resource files and binaries it wants, pick them, and package them up, without considering whether the current build failed to produce any resource files or binaries, and whether the ones it picked are actually from a previous, now-outdated build (left behind because the build data wasn’t cleaned).

This hideous problem surfaces when, after adjustments to the UI elements and resource files, compile-time errors are introduced. Carbide is below par at catching some of those errors. Some hideous errors that crop up during resource compilation, for example, are only caught at the far end of the build process, when the packaging tool fails to find compiled resource files because they never compiled earlier. If, however, older compiled resource files are available, and an error is introduced in the resource files that precludes new resources from being built, those errors are quietly lost: the packaging tool happily picks up the old resource files (as it can’t tell which are new or old, or whether any of the steps before it actually passed). As a result, the developer is left scratching their head for a long while, trying to figure out just why their changes are not being picked up when they run their application. And it could easily get worse than that.
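This, incidentally, is why I clean before every build. From the command line, that ritual corresponds roughly to the underlying Symbian tool-chain sequence below (a sketch; winscw udeb is just an example target):

> bldmake bldfiles
> abld clean winscw udeb
> abld build winscw udeb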

So, if you are a developer and you are not in the habit of cleaning your project before every build, one day or another you will get bitten by this hideous bug. And when you do, it will hurt for a long while before you are able to figure out where it hurts and band-aid it.

This is the reason why I am in favor of Carbide having a “rebuild” option.

Why does my wi-tribe connection shut off on the first of every month?

I have been noticing that on the first of every month, the wi-tribe connection I am using stops working as soon as I receive the invoice email. Browsing stops. Replies to ping requests stop. All Internet activity comes to a halt.

When this happens, I get in touch with a customer support rep and describe my problem. After a while, they figure out what is wrong, tell me to try again in a brief moment, and take their leave. They never clearly explain the cause of the problem, beyond it being some glitch in their systems. But whatever it is that they do works, and makes me happy.

In the early afternoon on the first of January, my connection stopped completely, soon after I received the invoice over email. From that point until an hour before midnight, I tried relentlessly to get some human at customer support to pick up the phone — I got the impression that the support staff had got drunk and passed out over New Year’s Eve, and didn’t come to work the next day. When someone finally did pick up, they were not able to fix my problem, promising me that a complaint had been lodged and that my problem would be resolved shortly.

However, one question they asked me during my brief conversation with them on the phone gave me an idea of what could be wrong. Because I have come not to trust the DNS resolvers local ISPs use, I always use either OpenDNS or, lately, Google DNS resolvers. With that little detail in mind, I edited the network settings on my computer to use no external DNS resolvers, only the local ones. My mail still didn’t work, nor did replies to ping requests show up. But what did half-work was accessing webpages in the browser: the browser redirected automatically to a wi-tribe internal page which told me that my invoice had been released, and that this or that would happen if I didn’t pay by the due date. What caught my attention, and also made me feel extremely silly, was a big button in the middle of the page that read to the effect of ‘Click to continue browsing’. Clicking that button did as advertised. I felt extremely stupid.

So, what was the problem? Since I was using external DNS resolvers, wi-tribe was unable to redirect me to their internal page on the first of the month when the invoice was generated. When I switched to their local DNS resolvers, I was able to see that page, click the big button, and continue using the Internet.
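Presumably, their resolver answers with the portal’s address until you acknowledge the invoice, while an external resolver keeps returning the real one, which is why the redirect never fired for me. A sketch of how the difference would show (10.0.0.1 is a stand-in for the ISP’s resolver, and the outputs are illustrative):

$ dig +short example.com @10.0.0.1   # ISP resolver: answers with the portal's IP
$ dig +short example.com @8.8.8.8    # public resolver: answers with the site's real IP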

To think that I wasted a lot of money on calls to customer support, and tortured myself being pissed at not being able to use the Internet or get someone at customer support to pick up, all due to a thing as silly as what I’ve described, I feel an uncontrollable urge to curse out loud.

The console that burns

In part on a friend’s insistence, I bought an Xbox 360 console a little less than six months ago. I knew before buying it that the Xbox 360 suffers from a scary problem, notorious by the name of “the red ring of death.” If your Xbox console is unlucky enough to get the red ring, it will likely stop working forever. There are no fixes for this problem, none whatsoever, from the console’s manufacturer, Microsoft. There are also no reliable precautions you can take to avoid it.

A simple search for “xbox” and “the red ring” will lead the curious reader to, among other information, the cause of the problem. It is a design fault: a glitch in the hardware which results in the desoldering of electrical joints on a particular chip inside the console in the face of persistent heat, heat that the console generates during normal operation. When the console cools off enough for the solder to settle back, short circuits ensue. If you are lucky, your console may still work.

The red ring problem in the Xbox has caused distress to countless owners. It has single-handedly achieved the greatest console return rate (the return of faulty consoles to the manufacturer after purchase) to date (as far as I know, though I may be off base here). Microsoft have acknowledged the problem and the mounting dissatisfaction caused to their customers, but have done little to solve it. They have shipped subsequent models of the Xbox that they tout as fixing what is a design oversight, but in reality they have only dampened the problem slightly: the red rings are not gone, just a tad less frequent — a dampening effect for the most part too small to notice. In other words, the problem persists by and large.

A precaution to keep the problem from happening [sooner], one that I have often seen suggested and that I follow myself, is to restrict continuous use of the console to less than three hours. This is preposterous. The Xbox is a hardcore gaming console, something the Nintendo Wii, in contrast, is not, since you are unlikely to play the Wii for a long stretch in one sitting; and as is characteristic of all hardcore gaming consoles and the hardcore games made for them, players play for hours on end. If you find this fact hard to believe, hunt down a serious gamer (an individual, most likely in their teens but not necessarily so, who is mad about playing console games) and spend a day with them, provided they spend the day playing games and not sleeping through it. It is not hard to understand how important prolonged gaming, despite the health hazards it carries, is to serious as well as mildly serious gamers. Even setting that fact aside, not being able to play games on a console for longer than roughly three hours for risk of blowing up the console is ludicrously absurd, for a severe lack of a better phrase to describe it. What sort of a console is that, you may well ask.

I winced when I heard about Project Natal. My immediate outburst was to the effect of: “Shouldn’t Microsoft be focusing on fixing the catastrophic problem in their bloody console as their first priority, instead of introducing, as they tout, revolutionary controller-free gaming to the Xbox?” It makes absolutely no sense to me why they would do that. The inner gamer inside me is crying for a longer, more involved, and more persistent gaming experience, as are many, many other gamers who own or have owned the Xbox. The Xbox is a great gaming platform, with popular game developers committed to releasing awesome games, with an almost robust mechanism for live community play — if only Microsoft would get serious, smack themselves on the back of the head, and set about rooting out the console burn-out issue.

Saved on a technicality

If your god is forever forgiving, provided that you bow only before him and consider him unparalleled, unchallenged, and single, would you indulge in petty (and non-trivial but non-cardinal) sins and deeds in the belief that at the end of your day you will be forgiven, if you so seek forgiveness for your sins from your god?

Would you, then, intensify your indulgences, as well as your acts of supplication, in the holy month of Ramadan, a sacred period of thirty (or more, or fewer) days during which your god is known by you to be at his most merciful and forgiving?

Would you not think, on reflection, that a trivial lie or a dishonest dealing in trade is but a small mistake that your god will not only not mind, but will also exonerate you from, if you spread your arms and submit to him with all your heart?

Of logic and conscience!

Inspired by The Value of Logic in Practical Life, by JayWalker, I decided to rant about logic.

Logic, in practical life, is, more often than not, what we want it to be. As Taleb underlines repeatedly in his book The Black Swan, people tend to look only for instances that corroborate what they want to believe. They almost never look for instances that challenge or invalidate their reasoning. And as Taleb says, “these instances are always easy to find.”

It is a natural tendency in humans. Once you have come up with a theory for, say, doing something, it is extremely difficult to go out and strenuously look for instances that prove your theory wrong. It is even more difficult to be continually on the prowl for such instances. And when faced with an instance that debunks our theory, we find it hard to accept head-on, and tend to ignore it — after all, enough other instances have confirmed our theory. We also forget, or don’t want to accept, that all it takes to disprove a theory is a single instance that invalidates it. Do we know, at any point in our lives, about all such instances that invalidate something we have come to rigorously believe in? Does what we know give us an edge over what we don’t know?

Is it, then, even conceivable to apply logic in its entirety to our day-to-day lives? For all we know, there may be that single instance hiding somewhere, to be discovered some day, shockingly, that will debunk our reasoning. Can we even affirm with certainty what is conceivable and what is not? The strict principles of validity that form a crucial part of determining logic could themselves be invalidated at any point in time. That we do not know of any argument or fact that does so does not mean that such an argument or fact does not exist. We really don’t know what we don’t know.

Conscience forms from having had a fear infused, by religion perhaps, of the consequences of committing actions that do not comply with what is considered morally ethical, just, or right — I say “by religion perhaps” because atheists tend to exhibit a heavy conscience too. A person with a heavy conscience does not succumb to the reasoning that “just because everyone else is doing it, so should I”. The resulting burden, if they did, would be immensely unbearable. But can what is considered morally ethical, just, or right be some day proven not to be? On a given day, you would be given to acknowledge that religious beliefs, perhaps some or perhaps a lot, are really similar in nature to logic: strict principles of validity applied to reasoning to come to some sort of conclusion. They tend to be.

So, when confronted with an opportunity to break a traffic signal, I hesitate, perhaps not immediately out of fear of being accountable before the deity I believe in, but because of the risk of hastily causing an accident (or even causing someone to lose their life). Another individual may exploit the same opportunity without thinking that far ahead. For them, the surety that they will not end up in an accident or get pulled over is enough to convince them to do it. If the fear of being accountable to the deity is set aside for a while, the only difference between the two processes of applying logic is that of being sceptical and of not being sceptical. Sure, I may not run into an accident every time I break a signal, but I can, one day. And however many times I don’t run into an accident, the probability that I will some day is not minimised. For the other individual, it probably does minimise, or even eradicate, that possibility.

I will continue my rambling another time in another post. For all it is worth, what I have written may come across as nothing more than senseless to someone reading it — it probably is.

Of bubbles and bitter experiences!

When it comes to procrastination (and therefore exaggeration), I stand unparalleled among the many circles I am known to be a part of. It has been a little over a year since I wrote about the procedure to follow to disconnect a PTCL DSL service, and about my undeniably firm resolve to sever the connection, and only very recently have I finally managed to get my act together.

The exchange building, where I found the DSL office, was anything but a pleasant sight. Dilapidated and worn down through years of constant neglect, the building recalled similar sights of government offices that I have had the misfortune of visiting. Every wall, every floor, every desk and chair, and every apparently dysfunctional ceiling-mounted fan I could lay my gaze on was layered with dirt and gunge. But what startled me the most was the sight of the two staffers in the small DSL room, sitting on chairs that could fall apart any minute, working on two dust-covered computers on an old, worn-out desk. My heart sank. I was already sweating from the sizzling weather outside, and the room felt hellish. A dusty, half-torn portable standing fan was as close as it got to any hope of relief from the heat. Throughout the ten minutes I had to sit in that room, except for the few moments I spent answering what was asked of me, I kept looking about, reflecting thoroughly.

It is easier if one can phase out and shield oneself from unsavoury, unpleasant circumstances by maintaining a bubble around oneself. But it is an uphill struggle keeping that bubble intact, for there are many places and many moments where its frailness becomes perceptible.

I digress. The steps I described in the post before remain the same for applying for discontinuation of service, with the exception that I did not have to surrender the equipment as it had been well over a year. Folks from their call centre called me twice the next day to enquire into my reasons for cancelling the connection.

To toss a wrapper out the window

No matter how depressing a picture the newspaper paints every passing day without fail, no matter what alarming heights corruption in the country reaches, notwithstanding the growing number of people who continue to die like cockroaches every day and fall prey to countless forms of crime and injustice, there is always a small fire of patriotism glowing somewhere deep within an individual. I know, and I admit, that amidst all that has been happening, you can only do so much to make a difference.

It is pitifully sad to find that we live in times where the motivation to do a good deed comes from whatever personal gains can be achieved as a result in the end. This puts the phrase “a means to an end” in a shady light: if the means is ignored for a bit, what constitutes the end? The personal gains which ultimately drive the means, or the ideal end for which the pristine means were acted out?

Patriotism, even a wee bit of it, is too much to ask for in the times we live in. I give you that. But being responsible is not something you can opt out of. I staunchly believe that every individual should do their part in being a responsible human. But, then, I am repeatedly told that the ends do not justify the means.

Need they?

Change can only begin from within you. If you can bring yourself to justify a good deed when there is nothing in it for you at the end, that is where you start to become responsible. That is where you start to embrace change. Keeping that wrapper or empty juice pack in your pocket and disposing of it in a waste bin later on, instead of rolling down your window and throwing it out onto the road, may not get you anything in return. But you would be playing your part as a responsible citizen and a good human. And playing your part is, after all, your moral responsibility.

I’ve long given up on talking this into people’s minds. It is almost completely hopeless. Even if I can get the point in, I can’t drive it anywhere except out across the other side. However, I have found a little success in leading by example. By playing my part, and by hinting ever so slightly, I can get people to notice and to think about it. The end result is marginally different from before, but even if it weren’t, I could not care less. I merely try to play my part alone, in my capacity, whenever and wherever I can. If only more people thought alike.

Whingingly yours

People who have worked with me know that I tend to whine a lot. I bitch about the smallest of things—not having a proper desk to work on, a comfortable chair to sit on, a powerful machine to work with, an LCD to look at without straining my eyes, copious space to park my car, a peaceful ride to and from work, or, even if ridiculously, a continuous supply of power to do anything with, and so on.

I don’t complain for the heck of it, or because I am a disgruntled individual pissed at almost everything in life all the time. I want to be the best and most effective at what I do. To be that, I must somehow be productive. I can give way to productivity only when I am at peace with myself and everything around me, able to concentrate on the work at hand. And for that to be, I have to not worry about the desk on which I work being an inconvenience because perhaps it isn’t big enough to hold half of what I need on it at any given time or wasn’t designed with ergonomics in mind, the chair on which I sit for the most part of the day causing backache, the machine on which I work my fingers to the bone being sluggish and agonizing, for that alone adds to my frustration more than any other single factor, the monitor causing repeated headaches and eye-aches, circling the area outside round and round to find some place to park my car only to find subsequently that someone bumped into it or that the traffic police morons towed it away from the seemingly cramped-up place I had found to park it in for the lack of a proper parking spot, getting stuck in rush-hour traffic for hours on end amidst impatient idiot drivers honking and squeezing their cars where a bike won’t fit, only to exacerbate the traffic jam, thereby causing me to end up with almost no energy to do any work on top of carrying a frustrated mood, sitting in the dark with no power, and so on.

Did you lose track of what I started with?

Joel, in his article `The Development Abstraction Layer`, puts it an order of magnitude more aptly and eloquently than I ever could myself.

Revisiting software patents

I wrote about software patents a while ago. Some of the comments I received raised what I found to be important points. Instead of addressing them there, I have decided to elaborate on them in a new post.

How do you distinguish between someone stealing your idea and creating and marketing a successful product, and someone thinking up the same idea themselves and creating and marketing a successful product? I don’t think you can. And whatever you would do to stop the person who stole your idea from making a fortune out of it, you would also end up doing to the one who didn’t steal your idea but thought it up themselves.

Ask anyone who has made a fortune in the software industry and they will tell you that ideas are cheap. If you won’t come up with a winning idea and act on it fast, someone else will. Thinking that someone else cannot is plain stupid. In the software industry, things like the time-to-market, reliability, and feature-set of a product (among others) matter much more than anything else.

A market thrives when there is competition in it. The ferocity of the competition ultimately defines how great a market is. In a great market, ideas are cheap. What you make of those ideas, and how fast, is what matters. You will find countless people out there who have at some point in their lives had excellent ideas, but had neither the expertise nor the resources to build on those ideas, or to build them in time. In order to beat your competition, you have to act fast. There is no other way. And if you are going to sit back and bask in the glory of having conceived a great idea that will bring you a fortune, someone else is going to go ahead, put in all earnest effort, build on that idea, market it, and actually make the fortune out of it. Whether they stole your idea or thought it up at the same time you did is irrelevant. They captured the market with a brilliant product, and you did not.

The fact that being a patent owner gives you an excuse to openly limit or affect your competition on grounds of patent infringement is perhaps the biggest downside of patents. You could argue about ‘prior art’ and the fact that patents can be challenged, but let’s be honest: legal battles take anywhere from months to years to settle, and cost lots of money. Bigger companies can afford both the time and the money, but smaller ventures that have created excellent products and started to make a name for themselves in the market get hurt badly. Small ventures have neither the money nor the time to bear the brunt of legal battles, and as such end up either destroyed or damaged immensely.

The process of granting a patent takes so long (often years) that you would think a reexamination of a patent would also take as much time, if not more. I may be wrong, but I have not heard of any reexamination process against a patent ever getting anywhere (don’t hesitate to correct me if I am wrong).

That is all I will say for now.