How to write MCollective Agents to run actions in background

The Marionette Collective, or MCollective for short, is a framework for running system administration tasks and jobs in parallel against a cluster of servers. As the number of servers you manage grows, the task of managing them, including keeping the OS and the packages installed on them updated, quickly becomes a nightmare. MCollective helps you drag yourself out of that nightmare and into a jolly dream, where you are the king, and at your disposal is a powerful tool with which you can control all of your servers in one go. I’ll probably do a bad job of painting MCollective in a good light, so I’d recommend you read all about it on MCollective’s main website.

Like every good tool worth its salt, MCollective gives you the power to extend it by writing what it calls agents: custom code that runs on your servers to perform any kind of job. You can read about agents here. MCollective is written entirely in Ruby, so if you know Ruby, which is a pretty little programming language by the way, you can take full advantage of MCollective. Incidentally, a part of my day job revolves around writing MCollective agents to automate all sorts of jobs you can think of performing on a server.

For a while I have been perplexed at the lack of support for running agents in the background. Not every job finishes in milliseconds. Most jobs of even average complexity take anywhere from several seconds to several minutes. And since I write an API that uses MCollective agents to execute jobs, I often run into the problem of the API blocking while an agent takes its sweet time to run. As far as I’ve looked, I haven’t found support within MCollective for running actions asynchronously.

So, I got down to thinking and came up with a solution, which you could call a bit of a hack. But in all my experience testing it, I’ve yet to face any issues with it.

I’ve an example hosted on GitHub. It’s very straightforward, even if crude, and the code is self-explanatory. At the heart of it: each action you want to run asynchronously forks its real work into a child process, with the parent returning immediately instead of waiting to reap the child, and each agent gets one extra action that fetches the status of the other actions. So, essentially, the approach has to be applied to every agent you have, but only to those actions you wish to run asynchronously. With the agents in place, all you’ll need to do is call them thus:

$ mco rpc bg run_bg -I node -j
{"status": 0, "result": "2434"}

and repeatedly fetch the status of the async action thus:

$ mco rpc bg status pid=2434 operation=run_bg -I node -j
{"status": 0, "result": ""}
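The fork-and-poll mechanics behind these two actions can be sketched in plain Ruby, outside MCollective’s agent DSL. This is only an illustration of the idea, not code from the linked example: the helper names (`run_in_background`, `finished?`) and the status file are my own, and a real agent would return the pid from its reply and accept it back as a request parameter, as the two `mco rpc` calls above show.

```ruby
require 'tmpdir'

# Fork the long-running work into a child and return its pid immediately,
# the way a backgrounded agent action would reply with the pid.
def run_in_background(status_file)
  pid = fork do
    sleep 0.1                         # stand-in for the long-running work
    File.write(status_file, "done")   # record completion for the status action
  end
  Process.detach(pid)                 # reap the child automatically; the parent never blocks
  pid
end

# What the extra "status" action would do: report whether the child is done.
def finished?(pid, status_file)
  return true if File.exist?(status_file)
  Process.kill(0, pid)                # signal 0 probes liveness without killing
  false                               # child is still running
rescue Errno::ESRCH
  true                                # child is gone without reporting: treat as finished
end

status_file = File.join(Dir.mktmpdir, "bg.status")
pid = run_in_background(status_file)
sleep 0.05 until finished?(pid, status_file)
puts File.read(status_file)           # prints "done"
```

`Process.detach` is what keeps the parent from having to `wait` on the child while still preventing zombies, which is the crux of the hack.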

It’s a solution that works. I’ve tested it for over a month and a half now across many servers without any issues. I would like to see people play with it. The code is full of comments that explain how everything works, but if you have any questions, I’d love to entertain them.

Mercurial: Merging changes from an untracked copy

One of our projects, tracked using Mercurial and hosted remotely on BitBucket, presented us with a version control conundrum we had not faced before. This post is dedicated to describing the problem and one particular solution to it.

The project in question is tracked using Mercurial and has a history to it. By history, I refer to a set of change-sets (or commits, if you are a Git user not familiar with the use of change-sets to refer to commits). We had a user, let’s call them User A for the sake of clarity, who downloaded the source of the project at a particular change-set. Let’s call that change-set ‘Change-set X’. Note that User A didn’t clone the project repository and update back to Change-set X. They used the ‘get source’ convenience feature on BitBucket, which makes it possible to download just the source of the project as it looks at a particular change-set or at the tip of the repository (by tip, I refer to the last change-set recorded by the project). So, User A got the source on their local system and made some changes to it. Unfortunately, they did all this without actually tracking the source under Mercurial, or any other version control system, for that matter.

Let’s introduce User B. User B is another contributor on the project. However, they have a clone of the repository on their local machine, and contribute changes and additions through their tracked working copy. User B had committed and pushed several changes since Change-set X, all of which were visible on the remote repository on BitBucket. This is where the problem arose. Both User A and User B now wanted the changes User A had made to their private, untracked copy to be merged and pushed to the remote repository, so that they were available to User B. How do they go about doing that?

The biggest hurdle here was that the copy User A had wasn’t tracked under Mercurial at all: it had no history, no change-sets, nothing of the sort. It just wasn’t being tracked.

I thought about this for a while and came up with a plan that seemed worth giving a go. It went like this. First of all, User A had to initialize a Mercurial repository in the local copy they had made changes to. This would create a new repository. Next, they had to push the changes upstream to the remote repository on BitBucket. However, when they did that, Mercurial aborted, complaining that the local repository was unrelated to the remote repository. Having looked around for what that really meant, I found out that Mercurial complains in that tone whenever the root or base change-sets of two repositories are missing or different, which was indeed the case here. Reading the help page for the “hg push” command, though, I noticed the “--force, -f” switch, with a note saying it could be used to force a push to an unrelated repository.

For what it’s worth, the forced push worked, in that it was able to push the changes to the remote repository. But it messed up the history and the change-set time-line a bit, because the change-sets from User A’s copy had a different base and parent than those on the tip of the remote repository. As far as User B was concerned, when they pulled the latest changes from the repository, they had two dangling heads to deal with and a merge to perform. The merge resulted in a lot of unresolved files with marked conflicts. Since I wasn’t familiar with the changes in the project, I didn’t follow up with User B (and User A) after this point.

While wondering what could be a better way to handle this situation, I posed the question on the #mercurial IRC channel on FreeNode. User “krigstask” was kind enough to provide a solution. His solution went like this. User A would clone the remote repository on their local machine.

$ hg clone URL cloned-repo

They would then jump back to Change-set X.

$ cd cloned-repo
$ hg update -C change-setX

They would then recursively copy the files from their local, untracked copy of the repo into their working copy.

$ cp -r /path/to/local/untracked/repo/* .

They would go through the diffs to ensure their changes are there.

$ hg diff

They would commit their changes.

$ hg commit

Finally, they would merge their commit with the tip of the repo.

$ hg merge

In hindsight, this is a cleaner approach that actually works. I wondered why it didn’t cross my mind when I was thinking about a solution. Many thanks to “krigstask” for taking the time out to explain this approach to me.

I hope I was able to provide something helpful for my version control readers in particular and everyone else in general.

Converting a Git repo to Mercurial

     Until today, we had most of our projects in Mercurial repositories hosted on a Windows box in-house. The Mercurial web interface was set up to provide convenient read-only access to both contributors and non-contributors within. However, the box in question was unreliable, and so was the Internet link it was alive on. And, in this world, there are few things more troublesome than having your source control server unavailable due to unexpected downtime.

     What’s special about today is that we moved most of our Mercurial repositories to BitBucket. While they will remain private, the contributors as well as the internal non-contributors will be given access to them. Aside from having the repositories on a central server that we can mostly always rely on, and a lovely Web interface to browse and control the repositories, we also get useful goodies like issue trackers, wiki pages, and an easy user management interface. It is a win-win situation for everyone.

     One of the projects I was working on had only been on a local Git repository. I started work on it at a time when I found myself deeply in love with Git. Since BitBucket is a Mercurial repository warehouse, I had to find a way to convert or migrate the Git repository into a Mercurial one.

     I looked around on the Web and found a lot of people recommending the HgGit plugin. As I understood it, this plugin makes possible, among other things, a workflow that involves working on a Git repository and pushing the change-sets to a Mercurial counterpart. However, the process of setting it up seemed rather complex to me. Plus, I didn’t want to keep the Git repository lying around once the migration was done. I wanted to migrate the Git repository to a Mercurial one, push it upstream to BitBucket, and make any future changes to the source code by cloning from the Mercurial repository. What HgGit offered seemed like overkill for my needs.

     I then discovered the Mercurial ConvertExtension. This extension does just what I wanted: convert repositories from a handful of different SCMs into Mercurial. The process of converting a Git (or any other) repository to a Mercurial one through ConvertExtension is very straightforward.

     As a first step, you are required to edit your global .hgrc file to enable the extension thus:

[extensions]
convert =

     You are then required to run the hg convert command on your target Git repository thus:

$ hg convert /path/to/git/repo

     This will migrate the Git repository, storing it in the current directory inside a new directory named repo-hg. Once inside the newly created Mercurial repository (and this part is important), you have to run the following command to check out the change-sets into a working copy:

$ hg checkout

     You may then push the repository with the usual hg push to BitBucket.


     PS: This blog post really helped get me going in the right direction.