How to write MCollective agents that run actions in the background

The Marionette Collective, or MCollective for short, is a cutting-edge framework for running system administration tasks and jobs in parallel against a cluster of servers. As the number of servers you have to manage grows, the task of managing them, including keeping the OS and the packages installed on them up to date, without a doubt becomes a nightmare. MCollective helps you drag yourself out of that nightmare and into a jolly dream, one where you are the king and have at your disposal a powerful tool you need merely wield to control all of your servers in one go. I’ll probably do a bad job of painting MCollective in a good light, so I’d recommend you read all about it on MCollective’s main website.

Like every good tool worth its salt, MCollective gives you the power to extend it by writing what it calls agents: custom code that runs on your servers to perform any kind of job. You can read about agents here. MCollective is written entirely in Ruby, so if you know Ruby, which is a pretty little programming language by the way, you can take full advantage of it. Incidentally, a part of my day job revolves around writing MCollective agents to automate all sorts of jobs you could think of performing on a server.

For a while now I have been perplexed by the lack of support for running agent actions in the background. Not every job finishes in milliseconds; most ordinary jobs, in terms of what they have to do, take anywhere from a few seconds to several minutes. And since I write an API that uses MCollective agents to execute jobs, I often run into the problem of the API blocking while an agent takes its sweet time to run. As far as I have looked, I haven’t found support within MCollective for running actions asynchronously.

So I got down to thinking and came up with a solution, which you could call a bit of a hack. But in all my testing so far, I have yet to face any issues with it.

I have an example hosted on GitHub. It’s very straightforward, even if a little crude, and the code is self-explanatory. At the heart of it, each such action in an agent forks its work off into a child process and returns immediately, without the parent waiting around to reap the child, and every agent gets one extra action that fetches the status of the others. The approach has to be repeated for every agent you have, but only for those actions you wish to run asynchronously. With the agents in place, all you’ll need to do is call an agent like this:

$ mco rpc bg run_bg -I node -j
{"status": 0, "result": "2434"}

and then repeatedly fetch the status of the asynchronous action like this:

$ mco rpc bg status pid=2434 operation=run_bg -I node -j
{"status": 0, "result": ""}

It’s a solution that works. I’ve been testing it for a month and a half now, across many servers, without any issues. I would like to see people play with it. The code is full of comments that explain what does what, but if you have any questions, I’d love to entertain them.


Fastly’s CDN accessible once again from Pakistan.

Three days ago I wrote about a particular subnet of Fastly’s CDN network being null-routed in Pakistan. Since the affected subnet was the one from which Fastly served content to users in Pakistan, websites that off-load their content to Fastly’s CDN, such as GitHub, Foursquare, The Guardian, Apache Maven, The Verge, Gawker, Wikia, even Urban Dictionary, and many others, stopped opening for users inside Pakistan.

However, as of today, I can see that the null-route has been lifted as mysteriously as it was put in place. The subnet in question, 185.31.17.0/24, is once again accessible; I have tested from behind both TransWorld and PTCL. I don’t know why it was blocked in the first place or why it has been made accessible again, whether it was an ignorant glitch on someone’s part or something intentional, but I am glad that the CDN is reachable again.

If you are observing evidence otherwise, please feel free to let me know.

185.31.17.0/24 subnet of Fastly’s CDN null-routed in Pakistan?

I rely heavily on GitHub and Foursquare every day, the former for work and pleasure, and the latter for keeping track of where I go through the course of a day. Since yesterday, though, I have been noticing that pages on GitHub take close to an eternity to open, if they don’t fail to load entirely. Even when a page does load, all of the static content is missing and many other things don’t work. With Foursquare, I haven’t been able to get a list of places to check in to. Yesterday, I wrote all of this off as glitches on Foursquare’s and GitHub’s networks.
 
It was only today that I realized what was going on: GitHub and Foursquare both rely on Fastly’s CDN services, and for some reason those services have not been working from within Pakistan.
 
The first thing I did was to look up Fastly’s website, only to find that it didn’t open for me either. Whoa! GitHub’s not working, Foursquare’s not loading, and now I can’t even get to Fastly.
 
I ran a traceroute to Fastly, and to my utter surprise, the trace ended up with a !X (comms administratively prohibited) response from one of the level3.net routers.
 
$ traceroute fastly.com
traceroute: Warning: fastly.com has multiple addresses; using 216.146.46.10
traceroute to fastly.com (216.146.46.10), 64 hops max, 52 byte packets
[...]
 6 xe-8-1-3.edge4.frankfurt1.level3.net (212.162.25.89) 157.577 ms 158.102 ms 166.088 ms
 7 vlan80.csw3.frankfurt1.level3.net (4.69.154.190) 236.032 ms
 vlan60.csw1.frankfurt1.level3.net (4.69.154.62) 236.247 ms 236.731 ms
 8 ae-72-72.ebr2.frankfurt1.level3.net (4.69.140.21) 236.029 ms 236.606 ms
 ae-62-62.ebr2.frankfurt1.level3.net (4.69.140.17) 236.804 ms
 9 ae-22-22.ebr2.london1.level3.net (4.69.148.189) 236.159 ms
 ae-24-24.ebr2.london1.level3.net (4.69.148.197) 236.017 ms
 ae-23-23.ebr2.london1.level3.net (4.69.148.193) 236.115 ms
10 ae-42-42.ebr1.newyork1.level3.net (4.69.137.70) 235.838 ms
 ae-41-41.ebr1.newyork1.level3.net (4.69.137.66) 236.237 ms
 ae-43-43.ebr1.newyork1.level3.net (4.69.137.74) 235.998 ms
11 ae-91-91.csw4.newyork1.level3.net (4.69.134.78) 235.980 ms
 ae-81-81.csw3.newyork1.level3.net (4.69.134.74) 236.211 ms 235.548 ms
12 ae-23-70.car3.newyork1.level3.net (4.69.155.69) 236.151 ms 235.730 ms
 ae-43-90.car3.newyork1.level3.net (4.69.155.197) 235.768 ms
13 dynamic-net.car3.newyork1.level3.net (4.53.90.150) 236.116 ms 236.453 ms 236.565 ms
14 dynamic-net.car3.newyork1.level3.net (4.53.90.150) 237.399 ms !X 236.225 ms !X 235.870 ms !X

Now, that, I thought, was most odd. Why was level3 prohibiting the trace?

I went looking for a support contact at Fastly to try to get anything that could explain what was going on. I found their IRC chat room on freenode (I spend a lot of time on freenode) and didn’t waste time dropping into it. The kind folks there told me that they’d had reports of one of their IP ranges being null-routed in Pakistan: the 185.31.17.0/24 range. I did some prodding about on the network and confirmed that that was indeed the subnet I couldn’t get to from within Pakistan.

$ ping -c 1 185.31.18.133
PING 185.31.18.133 (185.31.18.133): 56 data bytes
64 bytes from 185.31.18.133: icmp_seq=0 ttl=55 time=145.194 ms
--- 185.31.18.133 ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 145.194/145.194/145.194/0.000 ms

$ ping -c 1 185.31.16.133
PING 185.31.16.133 (185.31.16.133): 56 data bytes
64 bytes from 185.31.16.133: icmp_seq=0 ttl=51 time=188.872 ms
--- 185.31.16.133 ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 188.872/188.872/188.872/0.000 ms

$ ping -c 1 185.31.17.133
PING 185.31.17.133 (185.31.17.133): 56 data bytes
--- 185.31.17.133 ping statistics ---
1 packets transmitted, 0 packets received, 100.0% packet loss

They also told me they’d had reports from users on both PTCL and TWA that the range in question was null-routed. They said they didn’t know why it had been null-routed, but that they would appreciate any information the locals could provide.

This is ludicrous. After Wi-Tribe’s filtering of UDP DNS packets to Google’s Public DNS and OpenDNS servers (which they still do), this is the second absolutely preposterous thing that has pissed me off.

Django Login Session ID Extractor Script

In the not too distant past, I worked on an interesting web project, though my participation in it lasted only a brief period. I enjoyed the time I spent working on it. It was a project based on Django and Python, two technologies I simply love working with. The last tasks I worked on had to do with benchmarking the performance of the Django web application, as well as of the web server it ran on, which was Apache.

I had to use the excellent and extensive ApacheBench utility, more commonly known simply as ab. If you’re not familiar with ab, I recommend you check it out. The problem I faced was not being able to benchmark views that required an authenticated session; simply put, I wasn’t able to point ab at views that were login-protected. To get around this limitation, and to automate the process, I threw together a BASH shell script which I called The Django Login Session ID Extractor. It used a combination of standard Linux tools to log into the Django application via the login view and extract the session ID thus created. That session ID could then be passed to the ab command so it could hit login-protected views.
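
Purely as an illustration of the flow the script automates, here is a rough Ruby sketch of the same idea (the original is a BASH script). The URL, credentials, form-field names, and target view below are placeholder assumptions of mine, and it presumes Django’s stock login view with CSRF protection:

#!/usr/bin/env ruby
# Rough illustration: log in through the Django login view, pull the
# session ID out of the Set-Cookie header, and hand it to ab.
require "net/http"
require "uri"

login_url = URI("http://localhost:8000/accounts/login/")
http      = Net::HTTP.new(login_url.host, login_url.port)

# Fetch the login page first to pick up Django's CSRF cookie.
page = http.get(login_url.request_uri)
csrf = page["Set-Cookie"].to_s[/csrftoken=([^;]+)/, 1]

# Post the credentials along with the CSRF token.
login = http.post(
  login_url.request_uri,
  URI.encode_www_form(
    "username"            => "benchmark_user",
    "password"            => "secret",
    "csrfmiddlewaretoken" => csrf
  ),
  "Cookie"  => "csrftoken=#{csrf}",
  "Referer" => login_url.to_s
)

# A successful login redirects and sets a sessionid cookie.
session_id = login["Set-Cookie"].to_s[/sessionid=([^;]+)/, 1]
abort "Login failed; no session ID found." unless session_id

# Pass the session cookie on to ab so it can hit login-protected views.
exec("ab", "-n", "1000", "-c", "10",
     "-C", "sessionid=#{session_id}",
     "http://localhost:8000/protected/view/")

Once you have the session ID, ab’s -C flag is all it takes to make every benchmarked request look like it comes from a logged-in user.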

The script proved really useful during testing, and would have proven even more useful had I continued working on the project. In any case, I have made the script available on GitHub for the world to use, and I am hoping someone might make use of it.