Mercurial: Merging changes from an untracked copy.

One of our projects, tracked using Mercurial and hosted remotely on BitBucket.org, presented us with a version control conundrum we had not faced before. This post is dedicated to describing the problem and one particular solution to it.

The project in question is being tracked using Mercurial and has a history to it. By history, I refer to a set of change-sets (or commits, if you are a Git user not familiar with the use of change-sets to refer to commits). We had a user, let’s call them User A for the sake of clarity, who downloaded the source of the project at a particular change-set. Let’s call that change-set ‘Change-set X’. Note that User A didn’t clone the project repository and go back to Change-set X. They used the ‘get source’ convenience feature on BitBucket, which makes it possible to download only the source of the project as it looks at a particular change-set or at the tip of the repository (by tip, I refer to the last change-set recorded in the project). So, User A got the source on their local system and made some changes to it. Unfortunately, they did all this without actually tracking the source under Mercurial, or any other version control system, for that matter.

Let’s introduce User B. User B is another contributor on the project. However, they have a clone of the repository on their local machine, and contribute changes and additions through their tracked working copy. User B had committed and pushed several changes since Change-set X. All these changes were visible on the remote repository for the project on BitBucket. This is where the problem arose. Both User A and User B now wanted the changes User A had made to their private, untracked copy to be merged and pushed to the remote repository, so that they were available to User B. How do they go about doing that?

The biggest hurdle here was that the copy User A had wasn’t tracked under Mercurial at all. That meant that it had no history, nothing of the sort. It just wasn’t being tracked.

I thought about this. After much thinking, I came up with a plan that I thought was worth giving a go. The plan was like this. First of all, User A had to initialize a Mercurial repository on the local copy that they had made changes to. This would create a new repository. Next, they had to push the changes upstream to the remote repository on BitBucket. However, when they did that, Mercurial aborted, and complained about the local repository being unrelated to the remote repository. Having looked around for what that really meant, I found out that Mercurial complains in that tone whenever the root or base change-sets of two repositories are missing or different. This was indeed the case here. However, reading the help page for the “hg push” command, I noticed the “--force, -f” switch, with a note that said it could be used to force a push to an unrelated repository. For what it’s worth, the forced push worked, in that it was able to push the changes on to the remote repository. It did mess up the history and the change-set time-line a bit, because the change-sets from User A’s copy had a different base and parent than those at the tip of the remote repository. As far as User B was concerned, when they pulled the latest changes from the repository, they had two dangling heads to deal with and had to merge. The merge resulted in a lot of unresolved files with marked conflicts. Since I wasn’t familiar with the changes in the project, I didn’t follow up with User B (and User A) after this point.
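For the record, the sequence User A went through looked roughly like this (the repository URL is hypothetical, and the abort message is reproduced from memory):

$ cd /path/to/untracked/copy
$ hg init
$ hg add
$ hg commit -m "Changes made on top of Change-set X"
$ hg push https://bitbucket.org/team/project
abort: repository is unrelated
$ hg push -f https://bitbucket.org/team/project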

While wondering what could be a better way to handle this situation, I posed the question on the #mercurial IRC channel on FreeNode. User “krigstask” was kind enough to provide a solution. His solution went like this. User A would clone the remote repository on their local machine.

$ hg clone URL cloned-repo

They would then jump back to Change-set X.

$ cd cloned-repo
$ hg update -C change-setX

They would recursively copy the changes from their local, untracked copy of the repo into their working copy.

$ cp -r /path/to/local/untracked/repo/* .

They would go through the diffs to ensure their changes are there.

$ hg diff

They would commit their changes.

$ hg commit

Finally, they would merge their commit with the tip of the repo.

$ hg merge
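Assuming the merge goes through cleanly, the remaining steps (implied, though not spelled out in the IRC advice) are to commit the merge and push everything upstream:

$ hg commit -m "Merge local changes with upstream tip"
$ hg push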

In hindsight, this is a cleaner approach that actually works. I wondered why it didn’t cross my mind when I was thinking about a solution. Many thanks to “krigstask” for taking the time out to explain this approach to me.

I hope I was able to provide something helpful for my version control readers in particular and everyone else in general.

There is more to life than …

I came across this short but extremely powerful post on the gapingvoid blog. It not only moved me but forced me to question myself, to ask myself what it is that I’ve done in my life. I thought it needed to be reproduced in full here, with proper attribution.

To paraphrase Seneca, the tragedy isn’t that life is short, the tragedy is that we waste so much of it.

The other types of tragedy, the more violent kind, never worry me too much, thankfully. I never lost much sleep, worrying about wars or serial killers or whatever.

But the thought of getting to the end of my life and realizing that I had wasted most of it, that froze my blood.

As it should…

To this day, I remember that day in the summer of 2008 when I decided to pack and seal my laptop inside my cupboard, and board a one-way plane to Quetta to spend a month with relatives away from everything I did at home, far, far away from any form of technology. All I had when I reached the airport were a bag full of clothes, a book, and my cellphone (which I decided to carry so that I could keep in touch with my parents).

One month later when I returned, I had an incredibly painful realization dawn on me. I felt I had wasted an entire month of my life doing absolutely nothing. I thought of all the ways in which I could’ve spent that month that would’ve meant something to me, or of all the productive things I could’ve done that would’ve helped me and/or my never-ending quest for knowledge and for doing things that matter. I couldn’t come to terms with the fact that I had wasted over thirty days doing nothing more than resting, reading a book that was purely fiction, and socializing with a limited circle of relatives.

People take breaks, vacations, to cut themselves away from their hectic lives in order to refresh themselves, to revitalize themselves, to save themselves from the risk of burning out. When they come back to their lives after such breaks, they feel energized and ready to again take on the mountains that lie before them.

I took a vacation, I took a break. I took rest, I cut myself away from technology, from the usual things that made up my hectic life. But when I came back, I didn’t feel energized. I didn’t feel revitalized. I felt regret. I felt slow and lethargic. I felt angry at myself for having wasted so much time doing nothing.

To this day, I live with that regret. I understand that regrets are harmful and best thrown away, but that’s one of those things that are easier said than done.

Guide: Deploying web.py on IIS7 using PyISAPIe

I spent the last week toiling away, trying painfully to find a way to deploy a web.py based API on IIS7 using PyISAPIe. As frustrations began to mount, I had nearly decided to give up. Being a die-hard Linux and Mac guy, I despise having to work on Windows. Here I was, not only forced to work on Windows, but to find a solution for a problem that left no stone unturned in its effort to drive me crazy. As if someone decided all this misery wasn’t quite enough, I had to work over a remote desktop session in order to research, tweak, bang my head, and get things to work. Eventually, I cut through massive frustration and despair, managing to find a satisfactory solution. I almost danced in excitement and relief, letting out all sorts of expletives directed at Windows in general and IIS in particular.

To get back to the important question of deploying a web.py script on IIS7 using PyISAPIe: this guide lists the various steps I took, including snippets of the relevant code I changed, to tame the beast. I can only hope that what is below will help a poor, miserable soul looking for help as I did (and found none).

I worked with PyISAPIe because I had successfully deployed multiple Django websites on IIS7 with it. The script in question was going to be a part of another Django website (though acting independently). It only made sense to use PyISAPIe for it as well.

First and foremost, I had to install the web.py module on the system. Having had trouble before on IIS with web.py installed through easy_install, I decided to be safe and installed it from source. Getting web.py to work with PyISAPIe required a small hack (I notice I may make it sound as though it all came down to me in a dream, but in reality, it took me days to figure out, and clearly after much anguish and pain). In the file Lib\site-packages\web\wsgi.py lies the following function:

def _is_dev_mode():
    # quick hack to check if the program is running in dev mode.
    if os.environ.has_key('SERVER_SOFTWARE') \
        or os.environ.has_key('PHP_FCGI_CHILDREN') \
        or 'fcgi' in sys.argv or 'fastcgi' in sys.argv \
        or 'mod_wsgi' in sys.argv:
            return False
    return True

In its pristine state, when web.py is imported from a source file through PyISAPIe, an exception is thrown. The exception, while I don’t have the exact message, complains about sys not having an attribute argv, which reads fishy. Since the function _is_dev_mode() only checks whether web.py is being run in development mode, I decided I didn’t care about it, as I wanted everything to run in production mode. I edited the function such that its body would be bypassed and it would always return a False boolean value. It looked like this (the important change is the early return at the top):

def _is_dev_mode():
    return False
    # quick hack to check if the program is running in dev mode.
    if os.environ.has_key('SERVER_SOFTWARE') \
        or os.environ.has_key('PHP_FCGI_CHILDREN') \
        or 'fcgi' in sys.argv or 'fastcgi' in sys.argv \
        or 'mod_wsgi' in sys.argv:
            return False
    return True

This innocuous little addition did away with the exception.

Next up, I used the default Hello World-esque example of web.py found on their site to test the deployment (of course, I went on later to use my original API script, which was far too complex to trim down into an example). I called it code.py and placed it inside the folder C:\websites\myproject. It looked like this:

  import web
  urls = (
      '/.*', 'hello',
      )
  class hello:
      def GET(self):
          return "Hello, world."
  application = web.application(urls, globals()).wsgifunc()

It was pretty simple. You have to pay particular attention to the call to web.application. I call wsgifunc() on it to return a WSGI-compatible function to boot the application. I prefer WSGI.

I set up a website under IIS using the IIS Management Console. Since I was working on a 64-bit server edition of Windows and had chosen to use 32-bit version of Python and all modules, I made sure to enable 32-bit support for the application pool being used for the website. This was important.
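If you would rather flip that switch from the command line than through the management console, appcmd can do it (the application pool name here is hypothetical):

> %windir%\system32\inetsrv\appcmd set apppool "MyProjectPool" /enable32BitAppOnWin64:true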

I decided to keep the PyISAPIe folder inside the folder where code.py rested. This PyISAPIe folder contained, importantly, the PyISAPIe.dll file and the Http folder. Inside the Http folder, I placed the most important file of all: Isapi.py. That file can be thought of as the starting point for each request that is made; it is what glues the request to the proper handler and code. I worked with the Examples\WSGI\Isapi.py available as part of PyISAPIe. I tweaked the file to look like this:

from Http.WSGI import RunWSGI
from Http import Env
#from md5 import md5
from hashlib import md5
import imp
import os
import sys
sys.path.append(r"C:\websites\myproject")
from code import application
ScriptHandlers = {
  "/api/": application,
}
def RunScript(Path):
  global ScriptHandlers
  try:
    # attempt to call an already-loaded request function.
    return ScriptHandlers[Path]()
  except KeyError:
    # uses the script path's md5 hash to ensure a unique
    # name - not the best way to do it, but it keeps
    # undesired characters out of the name that will
    # mess up the loading.
    Name = '__'+md5(Path).hexdigest().upper()
    ScriptHandlers[Path] = \
      imp.load_source(Name, Env.SCRIPT_TRANSLATED).Request
    return ScriptHandlers[Path]()
# URL prefixes to map to the roots of each application.
Apps = {
  "/api/" : lambda P: RunWSGI(application),
}
# The main request handler.
def Request():
  # Might be better to do some caching here?
  Name = Env.SCRIPT_NAME
  # Apps might be better off as a tuple-of-tuples,
  # but for the sake of representation I leave it
  # as a dict.
  for App, Handler in Apps.items():
    if Name.startswith(App):
      return Handler(Name)
  # Cause 500 error: there should be a 404 handler, eh?
  raise Exception, "Handler not found."

The important bits to note in the above code are the following:

  • I import application from my code module. I extend sys.path to include the directory in which code.py lives so that the import statement does not complain. (I have to admit that the idea of importing application and feeding it into RunWSGI came to me while I was in the loo.)
  • I defined a script handler which matches the URL prefix I want to associate with my web.py script. (In hindsight, this isn’t necessary, as RunScript() is not being used in this example.)
  • In the Apps dictionary, I again route the URL prefix to the lambda function which actually calls the `RunWSGI` function and feeds it application.
  • I also imported the md5 function from the hashlib module instead of the md5 module as originally defined in the file. This was because Python complained about the md5 module being deprecated and suggested the use of hashlib instead.

And that’s pretty much it. It worked. I couldn’t believe what I saw on the browser in front of me. I danced around my room (while hurling all kinds of expletives).

There’s a caveat though. If you have specific URLs in your web.py script, as I did in my API script, you will have to modify each of those URLs and add the /api/ prefix to them (or whatever URL prefix you set in the Isapi.py). Without that, web.py will not match any URLs in the file.
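To illustrate (a hypothetical sketch; the handler and pattern are made up), a urls tuple that previously matched /hello would need to carry the prefix:

import web

# The same URL prefix routed to the app in Isapi.py ("/api/") must
# appear in every pattern here, or web.py will never match a request.
urls = (
    '/api/hello', 'hello',    # was: '/hello'
)

class hello:
    def GET(self):
        return "Hello, world."

application = web.application(urls, globals()).wsgifunc()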

What a nightmare! I hope this guide serves to help others.

Thank you for reading. Good bye!

PS: If you want to avoid using PyISAPIe, there is a simpler way of deploying web.py on IIS. It is documented crudely over here.

Getting back inactive memory on Mac.

     OS X has the habit of keeping recently closed applications in memory so that if they are run again, they load quickly. The part of physical memory used for this purpose is called “Inactive memory”. The “System Memory” tab on the Activity Monitor application gives a break-down of the physical memory, including available free and inactive memory. Because of the way OS X behaves, you may or may not notice your system running low on “free” memory every now and then. This discovery could perplex you, because despite being low on free memory, you can load applications and go about doing your work. This is possible because inactive memory can be released by the OS X kernel’s memory management subsystem on demand. If it finds that the system is running short on free memory, and the user has started an application that is not already loaded in inactive memory, it will gladly comply and release enough of inactive memory to be able to run the requested application.

     I recently found a command line utility on OS X to release most of the inactive memory. It is called “purge”. The short description for “purge”, from its man page, states that it forces the disk cache to be purged. The “disk cache” here actually refers to inactive memory. To run this command, you simply type “purge” into Terminal.app (or any other terminal application that you use). For example:

(Ayaz@mbp) [0] [~]
$ purge

     Before running the purge command, the memory break-down on my system showed 858MB of inactive memory. After the purge command ran, inactive memory went down to 270MB.

     You will notice that the system becomes a little unresponsive while purge is flushing the disk cache. That’s fine and nothing to worry about.

     If you can’t find purge on your system, it could be because you have not installed Xcode and the accompanying development tools. These are available on one of the OS X installation discs. You can now also purchase and download Xcode from the Mac App Store.

     Have fun and be nice!

Get all public interface IPs on a system using Python

     I recently came across a requirement in a project where I had to, in Python, programmatically extract all available public IPs on the available interfaces of the machine the code would run on. I looked around and settled on the following snippet of code that uses the built-in, standard socket Python module:

import socket

ip_list = [
    ip for ip in socket.gethostbyname_ex(socket.gethostname())[2]
    if not ip.startswith("127.")
]

     While this piece of code does find a public IP listening on any of the available interfaces, its restriction lies in not being able to return all public IPs across interfaces: it gives back just one IP.

     This clearly wasn’t sufficient. I looked around again, and this time, found a third-party Python module called pynetinfo. This module makes it possible to work with different network device settings.

     I rearranged the code around pynetinfo and produced this:

def get_inet_ips():
  try:
    import netinfo
  except ImportError:
    return None
  else:
    inetIPs = []
    for interface in netinfo.list_active_devs():
      if not interface.startswith('lo'):
        ip = netinfo.get_ip(interface)
        inetIPs.append(ip)
    return inetIPs

     The code above loops through all available and active interfaces on the system, fetching and storing their IPs in a simple data structure. That got me all of the IPs available to the machine, excluding the loopback one, which the code was set to discard.

     But that wasn’t it. There was a slight problem. Not all the active interfaces on the system had public IPs. Some had private, local LAN IPs in the 192.168.0.0/16 and 10.0.0.0/8 subnets. The code above was returning all the IPs it could find, including public and private ones.

     I then found the netaddr third-party Python module, which provides a Pythonic means of manipulating network addresses. I modified my code to use the netaddr module and came up with the following:

def get_inet_ips():
  try:
    import netinfo
    from netaddr import IPAddress, AddrFormatError
  except ImportError:
    return None
  else:
    inetIPs = []
    for interface in netinfo.list_active_devs():
      if not interface.startswith('lo'):
        ip = netinfo.get_ip(interface)
        try:
          ip_address = IPAddress(ip)
        except AddrFormatError:
          continue
        else:
          # If the IP is not private, use it.
          if not ip_address.is_private():
            inetIPs.append(ip)
    return inetIPs

     The netaddr.IPAddress.is_private() method in the code above determines whether the given IP is part of any of the defined private networks.
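     As a quick usage sketch (the returned address here is made up):

>>> get_inet_ips()
['203.0.113.10']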

     Admittedly, there is much room for improvement in the code above. I can only hope that if it doesn’t help, then at the very least it serves as an interesting read.

Converting a Git repo to Mercurial

     Until today, we had most of our projects in Mercurial repositories hosted on an in-house Windows box. The Mercurial web interface was set up to provide convenient read-only access to both contributors and non-contributors within. However, the box in question was unreliable, and so was the Internet link it was alive on. And, in this world, there are few things more troublesome than having your source control server unavailable due to an unexpected downtime.

     What’s special about today is that we moved most of our Mercurial repositories to BitBucket.org. While they will remain private, the contributors as well as the internal non-contributors will be given access to them. Aside from having the repositories on a central server that we can almost always rely on, and a lovely web interface to browse and control them, we also get useful goodies like issue trackers, wiki pages, and an easy user management interface. It is a win-win situation for everyone.

     One of the projects I was working on had only been on a local Git repository. I started work on it at a time when I found myself deeply in love with Git. Since BitBucket is a Mercurial repository warehouse, I had to find a way to convert or migrate the Git repository into a Mercurial one.

     I looked around on the Web and found a lot of people recommending the HgGit plugin. As I understood it, this plugin makes possible, among other things, a workflow that involves working on a Git repository and pushing the changesets to a Mercurial counterpart. However, the process of setting it up seemed rather complex to me. Plus, I didn’t want to keep the Git repository lying around once I was done with the migration. I wanted to migrate the Git repository to a Mercurial one, push it upstream to BitBucket, and make any future changes to the source code by cloning from the Mercurial repository. What HgGit did seemed overkill for my needs.

     I then discovered the Mercurial ConvertExtension. This extension does just what I wanted: convert repositories from a handful of different SCMs into Mercurial. The process of converting a Git (or any other) repository to a Mercurial one through ConvertExtension is very straightforward.

     As a first step, you are required to edit your global .hgrc file to enable the extension thus:


[extensions]
hgext.convert=

     You are then required to run the hg convert command on your target Git repository thus:


$ hg convert /path/to/git/repo

     This will migrate the Git repository, and store it in the current directory inside a directory named repo-hg. Once inside the newly created Mercurial repository (and this part is important), you have to run the following command to check out all the changesets into the working directory:


$ hg checkout

     You may then push the repository with the usual hg push to BitBucket.
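     Roughly, with a hypothetical repository URL, that final step looks like:

$ cd repo-hg
$ hg push https://bitbucket.org/youruser/yourproject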

     :)

     PS: This blog post really helped get me going in the right direction.

Browsing on the BlackBerry simulator

     I am not a big fan of BlackBerry smart-phones. I realize that there are a lot of people who seemingly can’t exist without access to their emails virtually all the time, and for those, BlackBerry, with its prominent push email feature, is perhaps a better fit than any other smart-phone platform out there. When it comes to me and my smart-phone usage, I would not go so far as to say that I can’t live without my phone. I can. By every measure, I consider myself a hardcore geek, perhaps more hardcore than most, but I am by no means a gadget freak. While it would be unfair to say that I absolutely abhor typing on small (semi-)keyboards, I don’t quite enjoy the experience either. When it comes down to typing, I would much prefer a full-fledged keyboard. That is why, to me, a compact laptop is many times more important than a fully equipped smart-phone. (For the curious reader, I own a Nokia E72.)

     For a recent mobile website project that I worked on, I had to deal with a complaint from the client that the layout of certain pages on the site didn’t look quite as expected on BlackBerry devices. Naturally, I had neither a BlackBerry handset nor an easy equivalent to test the issue for myself, so I did what anyone stuck in the same corner would do: I went over to the BlackBerry developer portal online to look for BlackBerry simulators.

     Unlike the Epoch simulator for Symbian/Nokia phones and the iPhone simulator, the BlackBerry simulators were spread out such that for each possible BlackBerry smart-phone model in existence, there was a separate simulator available. And each one of the downloads was anywhere from 50 to 150 MB in size.

     I chose the simulator for one of the latest BlackBerry handsets, and downloaded it. Like the Epoch simulator, BlackBerry simulators are Windows-specific, in that they are available in the form of Windows executable binaries. I didn’t have Windows anywhere in my study, so I had to set up a Windows guest inside VMware Fusion in order to set up the simulator. To cut a long, painful story short, I was able to install the simulator after tirelessly downloading a big Java SDK update, without which the installation wouldn’t continue. And then, I powered up the simulator. I was instantly reminded of the never-ending pain I had suffered at the hands of the Epoch simulator in my previous life as a Symbian developer. The BlackBerry simulator took ages to start up. I almost cursed out loud, because that fact alone opened up old, deep gashes that I thought I had patched up for good. I was mistaken. Never in my dreams had I thought of having to deal with such monstrosity ever again. And, to my utter, absolute dismay, here I was.

     Eventually, after what seemed to have been ages since I booted up the simulator, I was able to navigate my way to the BlackBerry browser. I let out a deep sigh and thought that I could now finally concentrate on the problem I set out to tackle. But, no! I couldn’t browse on the BlackBerry browser at all. No amount of fiddling with the 3G and WiFi settings all over the BlackBerry OS got browsing working. From what I could tell, both the 3G and WiFi networks were alive, but there was no traffic flowing through. I almost gave up.

     After groping around on the Internet with a wince on my face, I was finally able to find out why. Apparently, by default, the BlackBerry simulators are unable to simulate network connections. In order to do this, you have to download and install an additional, supplementary simulator called the BlackBerry MDS Simulator. Once this simulator is up and running, your actual BlackBerry simulator will be able to simulate network connections, browse, and do all sorts of network related functions. Who knew!

     As an aside, there’s also the BlackBerry Email Simulator that simulates messaging functionality.

Front Row glitch with Apple external keyboard

     MacOS has this application called Front Row. When activated, it takes over the entire screen, displaying what is known as the “10-foot user interface” to allow the user to kick back, relax, and watch and listen to videos, podcasts, music, or media content of any kind. If you’ve got a big screen, such as an Apple TV or an Apple cinema display (which if I may add are crazy expensive), with the aid of the Front Row application, you can enjoy a great media centre-esque experience.

     On MacOS, this application, Front Row, is tied to a keyboard shortcut: Command + Escape. Once activated, you can use the arrow keys to navigate your way through the 10-foot interface, selecting whatever media content you may want to look at. A really convenient feature of the Front Row application is its integration with the Apple remote. Essentially, you can sit back and navigate the media centre through nothing but the wireless remote.

     I’ve owned a MacBook for over two years now. Having used the keyboard on the MacBook for nearly all this time, I now find that I can barely type on it without causing myself a great deal of agitation. I’m a touch typist, and naturally, when I cannot type both fast and accurately, and when I know for a fact that I can’t, not because I don’t have the capacity to do so, but because the keyboard is the bottleneck standing in the way, I get frustrated very easily. This unfortunately happens to be the case with the MacBook keyboard now. To work around that temporarily, I recently dived, without a clear head, into a shopping spree and emptied my wallet buying the Apple extended external keyboard. While it is not really conducive to touch typing (something I find appropriate to elaborate on in a different article altogether), I am able to get by and get my work done without coming close to a nervous breakdown.

     Now, here, I should point out that I don’t have substantial evidence to prove this (and to that end, I am groping around for it), but I suspect that the Front Row application and the external Apple keyboard don’t quite play nicely together. I am not a very media-centric person, in that I am not altogether fond of watching movies as often as many of my friends do, for example, so I have little to no use for the Front Row application. However, since I do use all sorts of keyboard shortcuts to perform different functions across many esoteric applications (including Vim, use of which puts a lot of emphasis on the Escape key), I somehow end up pressing the wrong key combination and activating Front Row unwittingly. But that’s fine, you may say, because all I then have to do is close Front Row. Yes, well, sort of. My problem is that, with the external keyboard attached, if I accidentally start up Front Row and let it take over the screen, I am unable to exit it. The application itself has no UI elements that can be clicked to make it quit, and the keyboard shortcut that happens to be the only means (known to me) of exiting the application stops working in the presence of an external Apple keyboard. Because of all that, I end up with a big problem on my hands. I get stuck with an application that essentially takes over my computer, and I can’t do anything about it. I’ve tried waiting for the screen saver to kick in, hoping that when I make the screen saver go away, I would get control of my system back; I’ve also tried letting the MacBook sleep and wake up subsequently, but all in vain. The Front Row application simply doesn’t go away, until I am left with no other option but to forcefully reboot the MacBook, losing all my data inside all my running applications, which, I should mention, continue functioning normally behind the nauseating Front Row screen.

     I’ve had this happen inadvertently far too many times for me to continue to ignore it. So, I eventually did the first thing I could think of: find a way to disable the keyboard shortcut. It turned out to be pretty easy to do. Most of the common keyboard shortcuts linking to different common applications and actions are, as it turns out, defined under the Keyboard Shortcuts section of the Keyboard pane inside System Preferences. The shortcut for the bloody Front Row is somewhere in there.

     I am pretty sure there’s something amiss with Front Row and its associated keyboard shortcuts when an external keyboard is attached, but as I said before, I’ve nothing to substantiate that claim. Right now, I am happy and relieved to be able to disable the shortcut.

Ranting about Mercurial (hg) (part 1)

     The day I got a grip on git, I fell in love with it. Yes, I am talking about the famous distributed version control application: git. I was a happy Subversion (svn) user before discovering git. The day-to-day workflow with git and the way git worked made me hate Subversion. I was totally head over heels in love with git.

     I have come to a point where the first thing I do when I am about to start a new project is to start a git repository for that project. Why? Because it is so easy and convenient to do. I really like the git workflow and the little tools that come with it. I know that people who complain about git often lament its lack of good, proper GUI front-ends. While the state of GUI front-ends for git has improved incredibly and drastically over the years, I consider myself a happy command-line user of git. All my interactions with git happen on the command-line. And, I will admit, it does not slow me down or hinder my progress or limit me in any way in what I do. I have managed fairly large projects on git from the command-line, and let me tell you, if you are not afraid of working on the command-line, it is extremely manageable. However, I understand that a lot of people run away from the command-line as if it were the plague. For those people, there are some really good git GUI front-ends available.

     I have to admit that for someone coming from no version control background, or from a centralized version control background (such as Subversion or CVS), the learning curve of git can turn out to be a little steep. I know that it was for me, especially when it came to getting to grips with concepts involving pushing, pulling, rebasing, merging, etc. I don’t think I will be off-base here to think that a good number of people who use version control software for fairly basic tasks are afraid of merge conflicts when collaborating on code. I know I was always afraid of them, until I actually faced a merge conflict for the first time. That was the last time I felt afraid, because I was surprised to find out how easy it was to deal with and (possibly) resolve merge conflicts. There’s no rocket science to it.

     I have been so impressed with git that I wince when I am forced to use any other version control. In fact, I refuse to use Subversion, as an example, and try my best to convince people to move to git. Why? Because git is just that good. I should probably point the reader to this lovely article that puts down several reasons that explain why git is better than a number of other popular version control applications out there.

     Recently, at work, I have had to deal with the beast they call Mercurial (hg). Yes, it is another famous distributed version control system. I was furious at the decision of my peers to select Mercurial for work-related use, and tried my best to convince them to use git instead, but they didn’t give in to my attempts to persuade them. I had no choice but to unwillingly embrace the beast and work with it.

     Instantly, I had big gripes with the way Mercurial worked as well as its general workflow. Where I fell in love with how git tracked content and not files, I hated the fact that Mercurial only tracked files. What I mean by that is that if you add a file in a Mercurial repository to be tracked and commit it, and then make any changes to the file, Mercurial automatically adds it to the staging commit. It won’t let you specify manually which of the changed files should be part of the staging commit, something that git does. A staging commit is a staging area where files that have been changed and have been selected to be part of the next commit are loaded, ready to be committed. With git, you have to manually specify which files to add to the staging commit. And I loved doing it that way. With Mercurial, this is done automatically. This in turn brings me to the second big gripe I had with Mercurial: incremental commits. The git workflow places an indirect emphasis on incremental commits. What are incremental commits? In the version control world, it is considered best practice to perform commits that are as small as possible and that do not break any existing functionality. Say, for example, you have made changes to ten files to add five different functionalities that may be independent of each other, but you would like to commit only one functionality at a time. That is where incremental commits come into action. With incremental commits, you specify only the files you want committed and commit them, instead of committing all the changed files in a bunch with a general purpose commit message. Incremental commits are a big win for me, and I adore the feature. Because with git you have to manually specify which of the changed files you want in the staging area, incremental commits come easily and naturally. With Mercurial, which automatically adds all changes to tracked files into the staging commit, they do not.

     So I asked the obvious question: how do we do incremental commits with Mercurial? After sifting through the logs, I found out that it was indeed possible, but in a very ugly way. The Mercurial commit command takes an extra parameter, -I, that can be used to specify explicitly which files to make part of the commit. But that wasn’t it. If there were, say, five files of the ten changed files you wanted to commit, you would have to tag the -I switch behind each of those files; otherwise, Mercurial will fail to include the files in the commit. To this day, this little quirk bites me every now and then.
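     To illustrate (the file names here are hypothetical), committing just two of the changed files looks like this:

$ hg commit -I foo.py -I bar.py -m "Add the foo/bar functionality"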

     I will admit that the one thing I really liked about Mercurial was the built-in web server it came with, which could provide a web-based interface to your Mercurial repositories for viewing. This was something that was missing from, or quite difficult to do with, git. And this was also one of the biggest reasons why my peers at work decided to go with Mercurial. They all wanted a convenient way to access the repositories over the web.

To be continued.

Jagged fonts on Snow Leopard on LCD over DVI

     I have a 13″ white MacBook. It has a glossy display that is about the best I have ever used in my life. Great quality, lovely fonts. The only drawback with it is the drawback of all glossy displays: it reflects ambient light, which can at times cause problems. But I really don’t mind that. I can work around it easily. Overall, I am really happy with the display on the MacBook.

     The only problem is, 13″ is just not enough. Not all the time. But when you are managing many things side by side, and not least when writing code, you can really get frustrated from not having enough space on the screen. After all, it doesn’t help that you’ve got windows hiding behind your current window in focus because there is no space left anywhere on the screen to toss two or three windows side-by-side and still be able to get anything done.

     So, a year ago, I purchased, after careful thought and research, a ViewSonic VX1940w external LCD. The reason I settled on this one, and not any other, is a combination of: a) the great price I was getting on it; b) the excellent resolution it offered at that price; c) the fact that it sported both VGA and DVI inputs. I really, really wanted an LCD with a DVI input. If you haven’t clicked on the LCD catalog page, this LCD sports a max resolution of 1680×1050, is 19″ in size, and is wide-screen. It has a matte display, in that it does not have the reflection problem the glossy displays suffer from. But it isn’t nearly as crisp in display quality as the glossy.

     If you’ve had the occasion to use any Mac laptop, you’ll know that they don’t have the de facto VGA and DVI input/output ports. Instead, they can have any one of mini-DVI, mini-VGA, mini-DisplayPort, or micro-DVI/VGA ports, which, by the way, are all Apple proprietary stuff. So, how do you connect almost all of the LCDs out there that come with standard VGA and DVI ports and connector wires? You buy VGA and/or DVI adapters from Apple. These, I should mention, are anything but cheap.

     Naturally, I wanted to buy the DVI adapter for my MacBook. However, I did some research and found bad news, in that a lot of people on the Mac forums reported their LCDs not working with the DVI adapters Apple provides, because of the latter being either DVI-I or DVI-D, and most LCDs not supporting one or the other. That scared me. The DVI adapter was expensive to begin with, and I was not sure whether it would really work with my LCD. So, I took the safe lane out and bought the VGA adapter. And it worked out of the box.

     But, there were problems. These were the sort of problems I documented in a question titled, “Why do I feel a strain on my eyes when I look at the 19″ matte display?”, that I asked on the Super User forum. You may want to skim through it to get the excruciating details of the problems I faced with the VGA display on the LCD. The bottom-line was that I was told to get the DVI adapter instead, and that because of the jagged fonts on the VGA display and the generally sub-standard picture quality, my eyes were having a hard time adjusting, resulting in strain and headache. Initially, it didn’t make sense to me how that could really be the cause of the problem, and I didn’t buy it. But the fact remained that I couldn’t use the LCD for long without walking off with an incredible burden weighing down on my eyes.

     Finally, I bit the bullet today and bought a mini-DVI to DVI adapter. Having plugged it in, I noticed that the “auto image adjust” functionality on the LCD was disabled. It wasn’t when the VGA adapter was plugged in. Despite changing a couple of settings that I thought might make a difference, I felt that the fonts looked even more jagged, the picture quality worse than before. Having spent some more time staring at the screen, opening up terminals and editors and windows, I realized it wasn’t merely a feeling. The fonts and the display did look much worse than before. I freaked out. I didn’t know what to do. I rebooted the MacBook in the hope that when I plugged in the LCD again, I would actually surprise myself. To my utter dismay, nothing changed. I was sorely disappointed, and I didn’t know what to do. A feeling of remorse hit me for having spent a fortune on the DVI adapter only to get a crappier picture than I did on VGA.

     With a heavy heart, I started looking around. I skimmed with grief through a couple of similar posts on the forums about how fonts on an LCD over DVI input looked jagged, and such. Nothing suggested really helped. And then, I found this blog post, Snow Leopard Font Smoothing. It talked about the exact same problem I was having. And what’s more, it suggested a fix for the problem. The fix was to toggle a global setting on OS X by running the following command in Terminal.app:

defaults -currentHost write -globalDomain AppleFontSmoothing -int 2

     I did this, but nothing worked. I wasn’t really sure what else I had to do to get this to work, nor did the blog post hint at anything beyond running that command. After a while, I was convinced that there was no hope for me in this. I had nearly given up when I noticed the text on the menu bar. It looked different. It was crisper, better, more beautiful. And then it hit me. I quickly quit my Terminal.app, and when I re-opened it, voila! There they were. The lovely fonts I had fallen in love with on the MacBook screen. I restarted most of the applications one by one. Apparently, the little helpful bit that the blog post missed out on was that you have to close all your applications and re-run them for the setting to take effect. I imagine an easier way is to simply log off and log back in.
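     For what it’s worth, you can confirm the setting took hold by reading it back; it should print the 2 that was written:

$ defaults -currentHost read -globalDomain AppleFontSmoothing
2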

     I am absolutely ecstatic with all this. Not only can I use my big LCD to manage my work in a better and more productive manner, I don’t have to walk away with unbearable headaches and eye-strain. Well, I still have headache and eye-strain, to be honest, but that’s an every-day thing. I am so happy.

Using mouse inside Vim on Terminal.app

     When I am writing code, I spend most of my time inside Vim on Terminal.app on MacOS. When I am not writing code, I still end up spending a good bit of my time on Terminal.app, running all sorts of commands, using command-line applications (such as irssi for IRC), and editing files in Vim here and there.

     Coming from Linux, my biggest gripe with Vim on Terminal.app has been the fact that I could not scroll through the Vim window using the wheel on my mouse as I would in any normal editor. If I tried to scroll up or down, Terminal.app would scroll through the Terminal session instead and mess everything up. For a long time, I had to content myself with holding down the Fn and Shift keys and using the Up and Down arrows to scroll back and forth. Anyone who has done this can immediately imagine how annoying this can get.

     So, the other day I had a thought cross my mind and I asked myself, “Vim can’t really be that lame and not support the functionality of scrolling with the mouse properly. I am simply missing out on something.” I looked around, and sure enough, I found that Vim does support all sorts of mouse-related operations on the shell, including properly scrolling up and down. This particular feature of Vim is toggled via the “:set mouse=a” setting. You could go through the help menu, with “:help mouse” inside a Vim session, to see what other values, apart from “a”, that setting can take. But for all intents and purposes, the “a” setting is sufficient.
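     As an aside, to make the setting stick across sessions, it can simply go into your vimrc, for example:

$ echo "set mouse=a" >> ~/.vimrc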

     I was happy Vim could do that. But, when I toggled it in Vim on Terminal.app, it didn’t work. Scrolling, as before, would only scroll through the Terminal.app buffer and mess up things. I was disappointed.

     Sifting through a couple more threads here and there on the web about Vim, scroll, and Terminal.app, I was able to figure out the reason why, after enabling mouse support on Vim, it didn’t work on Terminal.app. As it turns out, Terminal.app eats away all the mouse gestures thrown at it, and does not delegate them to Vim or other applications running within the shell. It doesn’t let them through. Vim wasn’t getting any mouse gestures at all. This was a problem. I thought I had bumped into a big, thick wall that could not be crossed.

     But, I was wrong. Luckily, I somehow managed to find a plugin, by the name of Mouse Term, for Terminal.app that patched Terminal.app’s behaviour of masking mouse gestures and not letting them through to individual applications. Mouse Term, as is mentioned on its home page, requires another, small application, SIMBL, to be installed first to work. In fact, Mouse Term is a SIMBL plugin.

     Once I had SIMBL and then Mouse Term installed and Terminal.app loaded, I could actually scroll through Vim, with the “:set mouse=a” setting, perfectly. Not only that, I noticed something else as well that made me so happy. I use tabs in Vim. Vim makes available half a dozen commands, starting with the prefix “:tab”, to manipulate tabs, navigate through them, etc. If you’ve ever used tabs seriously in Vim, and used these commands to cycle through tabs, for example, you’d understand how frustrating it can get. But, guess what my excitement was about? Not only could I now scroll through a Vim window smoothly, I could also click on the tabs to switch to them. I could also click on the “X” in the top right corner to close a tab. It simply worked.

     The best part is when you have the NERD Tree Vim plugin installed. You can, inside the NERD Tree window, expand and collapse directories and open files simply at the click or double click of the mouse. As a Vim user, what more could you want?

Tempting Firefox plugins for the mischievous minded

     I thought I would make a quick mention of a web page I came across that introduces really useful third-party Firefox plugins, the kind a person such as myself, who gets involved in hanky-panky from time to time, can make really good use of. I know I have relied on Firebug on far too many occasions to mention, to slice and dice and hack away at the source of live web pages, and on Live HTTP Headers slightly less so, for inspecting the crud that goes back and forth under the hood while browsing. But the page lists other, more tempting plugins that could make your life a lot easier if you swing that way. :)

Pakistan Summer Time, and NTP on OS X

     I noticed today that, after a major update to OS X along with a security update, the time on the system clock was an hour ahead. In fact, I didn’t pick it up until after I had glanced at the time on my cell phone. When I opened the preferences where different settings related to time and date can be set, I realised that the Network Time Protocol (NTP) had been enabled, which meant that the system was syncing time and date, along with the usual time zone information, from a remote network time server. In my case, that server was time.asia.apple.com, one of three servers in the drop-down list of NTP servers in the preferences to choose from.

     As with the other two, time.asia.apple.com is an NTP server managed by Apple themselves. If you travel a lot, or if you are in a place where daylight saving time is observed, being able to use an NTP server and not worry about having to change the time and date yourself is ideal. It is convenient. After all, time is important, and keeping track of time more so.

     Now, I love NTP. It sure beats having to change time manually all the time. But, what if the NTP server you so dearly depend on suddenly starts spewing out incorrect time? Well, you’d eventually notice that, yes, but it would be annoying. The emails you send are suddenly ahead of time, the IM messages you receive as well, your calendar events, etc. If the difference in time due to the error is subtle, say, maybe off by an hour or so, you will likely take longer to spot it. Not that your house will burn down, or your business will plummet in a downward spiral into loss, but it sure will cause problems, even if little, annoying ones.

     So, why am I here on a hot Saturday afternoon with no mains power, talking about all this? Because I found out today that time.asia.apple.com is giving out a time for Pakistan that is GMT+6, when it should correctly be GMT+5. Judging from the label “Pakistan Summer Time” that the NTP server is using to describe the time, I can understand where this skew in time is creeping in from. But it is wrong. And the time on my system is wrong. What’s worse is that the date and time settings in the system preferences do not provide an option to use a custom NTP server of my own choosing. I am restricted to choosing from the drop-down of three NTP servers, only one of which applies to my time zone. Bugger!

     Until I found /etc/ntp.conf. This small text file stores the address of the NTP server to use. Regardless of whether you have NTP time enabled in the preferences pane, you will have an existing entry in the file. If you change the address in there to point to something else, say, asia.pool.ntp.org, the system will use the new NTP server. In the preferences, the NTP server you added will automatically be selected for you, even though, if you pull the drop-down, you won’t notice it among the choices available.
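     The file holds plain “server” lines, so the change can be as simple as this one-liner (a sketch that replaces the single entry; run with care, as it overwrites the file):

$ sudo sh -c 'echo "server asia.pool.ntp.org" > /etc/ntp.conf'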

     The only problem is, asia.pool.ntp.org also has Pakistan time pinned down at +6 GMT. Square one!

The engineer I met, and lost.

     My brother has a desktop PC, a P-IV, which runs Windows XP. One day his system began exhibiting random freezes. He took it to a computer shop nearby where it was diagnosed that the power supply on the chassis had gone south. The replacement supply, costing around 600 PKR including labour, worked well, until after a few months, the same problem resurfaced.

     Suspecting the RAM this time, I ran the memtest application found on Ubuntu CDs, which runs all sorts of tests on the RAM sticks installed on the system, leaving it running for hours. Alas, it detected a couple of erroneous memory positions on one of the two RAM sticks on board, without really indicating which one. Not having any additional RAM sticks lying around, I could not test to make certain that the problem was indeed the faulty stick, and which one it was. I sat down to search for replacement sticks to buy, shocked at the prices of the rather older type of RAM the mother-board on the computer could live with.

     As luck would have it, before ordering an expensive pair of sticks, I decided on a whim to take the computer to the same shop that had replaced its power supply. And as I would realise, I was very lucky I did that.

     If a computer is booted up with no RAM sticks attached, the system, on POST, beeps twice. This combination beep is a signal that the system failed to find any physical memory on the system. And it is a good thing. If the system, for any reason, does not beep with no physical memory attached, then that is an indication something is wrong somewhere on the mother-board.

     That’s what happened at the computer shop when the engineer there tried to start the system without the RAM sticks. When he told me what I hoped not to hear, that there was something amiss with the mother-board, I assumed almost instantly, to my dismay, that there was no way around it but to buy a new mother-board. I think I must have thought out loud, because, to my most pleasant surprise, the guy debunked my assumption, asking me to leave the system at the shop for him to diagnose the problem at leisure. I complied, not having any choice.

     Four hours passed, and he called me to tell me that a couple of ICs on the board around the area where the RAM slots were had burnt out, and that he could fix them for a meagre 400 PKR. Overly ecstatic at the prospect of not having to buy a new mother-board, I got him to reaffirm to me that that would indeed, completely, certainly, fix the problem straight-up.

     When I went to pick up the system, the guy was busy fixing the board in one congested corner of the shop. He had a small desk cluttered with boards, ICs, RAM sticks, and solder dropped from the soldering iron all over. He was perched at his desk, immaculately using a type of soldering equipment I had not seen before: a soldering gun that blew hot air, much like a hair dryer. If you’ve ever used a needle-pin soldering iron, you will quickly understand how difficult it is to use a gun that blows hot air to melt the solder. On a mother-board with many, many ICs soldered close to each other, having a gun blowing hot air all over is problematic. But he did it as if it were an every-day routine for him. I was impressed. And I was happy.

     He was busy tending to customers, and I couldn’t get him to steal some time for a chat. I was, though, able to find out that he was an engineering student at a local engineering university in the area, and worked part-time at the shop. He gave me his card, telling me with a smile on his face that I could contact him any time I had a problem. I thanked him profusely and left.

     A month ago, I was deeply saddened to find out that the shop I had been to was no more there, replaced by another, different shop. And the card he had given me, I was unfortunate and clumsy enough to have misplaced.

     Oh, well.

VMware Fusion: The network bridge on device /dev/vmnet0 is not running

     That seems to be a common problem that users of VMware Fusion suffer from. I had to face it recently. I use VMware Fusion on Snow Leopard (OS X) to run Slackware Linux. My MacBook, on which I am running Snow Leopard, is connected to a WiFi hotspot via AirPort. I have VMware Fusion set up to use the network in bridged mode.

     A few days back, after having rebooted OS X (I reboot it what appears to be once every month), I started getting “The network bridge on device /dev/vmnet0 is not running” error on booting the Slackware VM. The network remained disconnected, with any attempts to connect it manually ending in that error. That meant that I could not get the VM on the network and use it (I tend to remotely utilise the VM over secure shell).

     Quitting and re-running VMware Fusion was in vain. It wasn’t until I searched for a possible cause of the problem and its solution that I found one that worked for me. This particular solution requires manually re-running a bootstrap BASH script that is bundled with VMware Fusion. This script takes care of setting up the pertinent network interfaces and their modes, which are subsequently used by the VMs.

     To be specific, that particular bootstrap script is found under the following folder:

/Library/Application Support/VMware Fusion/

     The name of the script is “boot.sh”, and it needs to be executed as root. So, all in all, what needs to be done is to run this command from the shell:

$ sudo /Library/Application\ Support/VMware\ Fusion/boot.sh --restart

     I found this tip mentioned in one of the responses in the discussion here.