Vim Ex #1

This is the first in what I’ve decided will be a series of infrequent posts dealing with my experiments with Vim.

There’s already a ton of material on this out there, but against my better judgement I’ve decided to add to the million existing articles and blog posts.

This will give me a place to point people to when they ask me how to do X in Vim, and it will simultaneously serve as an archive of my experiments with Vim. You know, in case I need to go back and redo my .vimrc. (This happens more frequently than I’d care to admit.)

Tmux and slime.vim in action.


I have these phases of intense workflow, tool, and knowledge gathering, followed by long periods where I use some of the newer tools, tips, and tricks while the rest are forgotten with time.

I had one of these phases of intense learning these past couple of days: a huge bout with irssi and Vim, and a smaller one with Pentadactyl and OS X.

I’ll be writing about the Vim side of things in this post, mostly because I didn’t find this exact combination of things written up anywhere.

These past couple of days I’ve been mucking around with a couple of functional languages, namely Haskell and Racket.
Now, the thing about functional languages (especially the more Lisp-y ones) is that they work great with REPLs.
I’m no stranger to REPLs; I’m a Python programmer before anything else, and I’ve had the pleasure of using both the standard Python REPL and IPython (a fancier Python REPL with loads of extra features).

The thing that has almost always annoyed me as a Vim user is the lack of an integrated editor/REPL that allows me to run code right from my editor.
Everything I’d tried so far had been a hack.
I gradually learnt to use Tmux. Tmux’s split panes helped.
I switched from:

edit text --> save text, close vim -->
run code from cli --> check errors -->
open vim --> repeat


to:

edit text --> save --> switch pane -->
run code from cli --> switch pane -->
check errors --> edit text --> repeat

which is somewhat faster… but something was still missing.
I’d seen Emacs users writing Lisp run their code right from the editor, with results magically appearing inside their edit buffer/pane.
So that it actually becomes:

edit text --> save --> run from editor -->
check errors --> edit text --> repeat

No switching panes or buffers, no jumping in and out of your editor.

For the lazy programmer this is better because it’s fewer steps.
For the efficiency seeker it’s much much faster and speeds up development significantly.

Now I wanted this for Vim, and for a long time I didn’t search hard enough because I got by with the aforementioned tmux split method. But no more!

While setting up Vim for use with Racket I ran across this fine page: Vim for Racket.

This has some amazing stuff, but the gold mine here is slime.vim: it gives you the same advantages as live REPL evaluation in Emacs, in Vim!

It uses a mixture of tmux (or screen, if that’s your thing), Vim, and some behind-the-scenes file sockets and other magic to dump whatever you want from the editor into the REPL, which then evaluates the code.
It’s brilliant!
The nice thing about it?
It’s not Racket-specific; you can use it with any language that has a REPL/interpreter.
This means you can use it with Python, Perl, Ruby, JavaScript (Node.js), and many, many variants of Lisp.
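For reference, if you end up using the vim-slime plugin (the actively maintained descendant of the original slime.vim script, so option names may differ from the exact script linked above), the tmux hookup is only a couple of .vimrc lines; the socket and pane values below are illustrative and should be adjusted to your own tmux session:

```vim
" Send text to tmux instead of GNU screen
let g:slime_target = "tmux"
" Which tmux socket and pane to dump text into (values are examples)
let g:slime_default_config = {"socket_name": "default", "target_pane": "1"}
```

With that in place, the default Ctrl-c Ctrl-c mapping sends the current paragraph (or visual selection) straight to the configured pane.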

There’s an even fancier variant of this with many more features, called Slimv, which I haven’t gotten around to trying but which looks very promising.

Pretty Greek Symbols!



Now on to another absolutely fabulous but frivolous discovery.
Vim 7.3+ has a feature called conceal.
This lets you select strings and replace them with a single character. How might this help, you ask?
Well, programming languages are, in some sense, an abstraction of, or at least draw heavily from, mathematics.

Now mathematics works with lots of Greek Symbols, and some of that has carried over to programming languages. But since it’s hard to actually type λ or Σ directly, we’re left with typing lambda or sum as words instead, which is fine, except it’s not as easy to read as a mathematical equation.

Enter conceal.
It lets you convert all your wordy keywords into beautiful-looking Greek symbols.
Why would someone want to do that?
It looks good!
I did say this was a frivolous one, didn’t I?

If you’re a pythonista, there’s vim-cute-python

If anyone’s interested in what goes into the .vimrc, it’s not much; all I added was:

au VimEnter * syntax keyword racketSyntax lambda conceal cchar=λ
au VimEnter * hi! link Conceal racketSyntax
au VimEnter * set conceallevel=2


Funsize, Internship, Mozilla, Releng

What the hell is funsize? (Introduction)

And that’s exactly how our eyes look after a night of ceaseless monitor peering too.

This is a post I have been putting off writing for a while now.
But now that I’ve finally gotten down to it, let’s begin!

The first question anyone might ask is What the hell is a funsize?
Go look at Funsize Etymology.

Now that we have that out of the way, let’s actually begin.

Funsize, or Senbonzakura*, is technically a “Partial MAR generation service” that generates “Partial MARs on demand”.
Too much jargon? No problemo, Señor.
We’ll break it down bit by bit.

If you’ve ever used Firefox, and I hope you have, you’ll know that Firefox ships automatic updates to users that install in the background and get applied when you reboot.

Nifty, right?

What you may or may not know is the stuff that goes on behind the scenes.
Getting updates to your browser, and getting them to you so that they don’t break your browser, is harder than you might think. We, and by we I mean your friendly, probably-not-in-your-neighbourhood Release Engineering Team, do a lot of stuff to make sure that doesn’t happen.

There’s a huge Ginormous Pipeline.

My project focuses on a small part of it.

Applications are growing in size these days and so are their installers: the latest Firefox release builds weigh in at 35MB for the Linux64 install, 28MB for the Windows setup, and a hefty 58MB for the Mac OS X install. They are big, so whenever you’re served an update, you aren’t given the entire new installer to install off of unless it is deemed absolutely necessary.

We at Mozilla value your bandwidth, because not everyone lives in a country with unlimited bandwidth. We at Mozilla also value our bandwidth because serving complete installers where it’s not necessary isn’t great for resources, server load and CDN costs.

Enter, Partial Updates.

Partial Updates, as the name suggests, are not Complete Updates.

The rationale behind Partial Updates is that we don’t really need to give you an update in the form of an entire new installer, because even though things change from version to version, a lot of things also remain the same.

So what we do instead is just give you the bits that change.

It’s called a ‘diff’, or more precisely in our case, a ‘binary diff’ in developer parlance. We figure out what changed between the version you have and the version you’re being updated to and then send you the diff. Diffs are typically much smaller than the entire installer.
Sometimes, up to 1/7th the size of a Complete update, depending on what platform and which channel you’re on.
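To build intuition for why a diff is so much smaller, here is a toy byte-level sketch (my own illustration; the real MAR tooling uses proper binary diffing, nothing like this): split both versions into fixed-size blocks and ship only the blocks that changed.

```python
def toy_diff(old: bytes, new: bytes, block: int = 64):
    """Return (offset, block) pairs for blocks of `new` that differ from `old`."""
    changed = []
    for i in range(0, len(new), block):
        if new[i:i + block] != old[i:i + block]:
            changed.append((i, new[i:i + block]))
    return changed

old = b"A" * 1024
new = b"A" * 512 + b"B" * 64 + b"A" * 448  # one 64-byte region changed
patch = toy_diff(old, new)
print(len(patch), "changed block(s) out of", len(new) // 64)  # 1 changed block(s) out of 16
```

When most blocks are unchanged, the patch carries a small fraction of the full file, which is exactly the property the real binary diff exploits.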

This reduced size seems like a win-win for everybody involved, and it is … mostly.

Unfortunately, generating these diffs is not a computationally cheap process.

In fact, it can take a while.

My super-fast work machine can take anywhere between 1 and 5 minutes to generate one, and the production machines that actually generate the official “Partials” aren’t nearly as powerful, so expect times longer than those on my developer machine. Now, 3 minutes (on average, say) to generate an update doesn’t sound like a lot.

And you’re right, it’s not a whole lot… for one Partial.

Here’s where the diversity with which Mozilla ships, and thus the scale at which RelEng works, kicks in.

We ship up to 97 different locales.
For 4 different operating systems.
And we do such shipments every day:
Nightly and Aurora, in addition to the less frequent bi-weekly Beta and 6-weekly Release builds.
Oh, and did I mention we go back up to 3 versions of Firefox every time?

The numbers add up.

I would let you do the math.
But just to drive the point home, we spend 9 days of compute time.

Every. Single. Week.
Just for generating updates.
And this is not even counting the other branches like UX or E10S.

Like I said, it all adds up.
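For the curious, the multiplication is easy to sketch (the figures are the ballpark numbers quoted above, not official release counts, and the real schedule doesn’t regenerate every combination every day):

```python
locales = 97        # up to 97 locales shipped
platforms = 4       # operating systems
back_versions = 3   # partials reach back up to 3 versions

partials_per_day = locales * platforms * back_versions
print(partials_per_day)        # 1164 partials for a single daily channel
print(partials_per_day * 7)    # 8148 a week -- and that's one channel, before Beta/Release
```

Multiply thousands of partials by minutes of compute each and the 9 days of compute time per week stops being surprising.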

Now this is where my application comes in.
Caveat lector: Senbonzakura/Funsize is far from done, and there’s still work to do before it becomes “profitable”, so to speak, but that doesn’t mean its goals aren’t ambitious.

So what does my application do?

It does one thing and one thing only; it generates Partials.
And its sole purpose is to do so efficiently.
In essence, Senbonzakura/Funsize is a web service that generates a partial between any two versions of Firefox you need it to.

There are a lot of advantages to having a dedicated service for something like this.

  • The service can optimize partial generation for speed in ways that the current “generate partials as part of a release” process can’t.
    There are possibilities of diff- and file-level caching across versions and locales. Imagine generating a diff for one locale of Firefox and being able to use it across locales. Ballpark numbers suggest that could cut compute time by a factor of 89. In practice it may not reach that number, but it should still be a sizeable factor nonetheless.
  • We can generate partials that were not possible before, or had to be generated by hand.
    Sometimes a lot of users are using an older version of Firefox. A version that had a massive uptake at the time of release, but maybe not a whole lot of users updated it since.
    Now they’re all clamouring for the next big release.
    If the version in question is more than 3 versions before the current release, they’ll end up being given a complete installer instead of a partial, because we do want them to update, for their sake and ours. Once Senbonzakura/Funsize comes into play, the entire process of identifying a large user pool and migrating it to a new version via partials can be automated, because the service can generate a partial on demand.

Having a separate service do the heavy lifting also moves the update generation process out of the critical release pipeline. This results in faster builds and less chance of bustage.
There are also other smaller, albeit equally important, use cases, such as migrating people from a no-longer-supported locale to the nearest supported one.
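The cross-locale caching idea mentioned above boils down to content addressing: if the file being diffed is byte-identical across locales, key the cache on a hash of the inputs rather than on the locale. A minimal sketch (function names and structure are mine, not Funsize’s actual API):

```python
import hashlib

cache = {}  # diff cache keyed by content, not by locale

def cache_key(old_file: bytes, new_file: bytes) -> str:
    """Derive a cache key from the bytes of both inputs."""
    return hashlib.sha256(old_file + b"\x00" + new_file).hexdigest()

def get_or_generate_diff(old_file, new_file, generate):
    """Return a cached diff, generating it only on a cache miss."""
    key = cache_key(old_file, new_file)
    if key not in cache:
        cache[key] = generate(old_file, new_file)  # the expensive step
    return cache[key]

# Two "locales" sharing an identical binary hit the same cache entry:
calls = []
gen = lambda o, n: calls.append(1) or b"diff"
get_or_generate_diff(b"v1", b"v2", gen)
get_or_generate_diff(b"v1", b"v2", gen)  # cache hit, no second generation
print(len(calls))  # 1
```

The expensive generation step runs once; every later locale with the same bytes gets the stored diff for free.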

What does it run on, you ask?

It’s basically a Flask app with a Celery backend, tying into a self-written file cache and an SQLite or MySQL database, depending on your choice. We’re also doing a little bit of Docker with a dash of AWS.

It sounds fairly simple, but I’ve had my share of architectural and debugging “nightmares”.
But maybe that’s just an Intern-I-have-no-idea-what-I’m-doing.jpg thing.

I’m an Intern, did you honestly expect better?

The application is still very much a work in progress and there’s a lot to be improved, but it’s getting better and hopefully I’ll have the chance and time to keep working on it to get it in even better shape.

Now, if the ELT could please switch the spotlight to point towards our audience.

If you, dear reader, found this project interesting and you are able and willing to contribute, read on!

Mozilla’s Release Engineering Team is looking for talented young contributors like yourself to help out! Don’t waste a day, join #releng on IRC TODAY and ask how you can contribute to awesome!

If you’re looking for information on how to get started with Senbonzakura/Funsize in particular, I have this amazing, albeit incomplete HOWTO Start contributing guide that I wrote on a bus back from LA with my eyes closed.
</end id=’shameless plug’>

* It will always and forever be Senbonzakura to me.


I did a talk about Senbonzakura/Funsize at the Mozilla SFO office (as part of my internship); go watch it here!

Funsize, Internship, Mozilla

What the hell is Funsize? (Etymology)

It tastes reaaally good.

The origins of the name, the story behind it all, in a familiar annoying Magazine style Q&A.

Is it a new size of clothing?

Is it the name of a compact family hatchback? (Car companies do silly names like this don’t they?)
It very possibly could’ve been, but it’s not.

Were you on prohibited substances when you came up with that name?
Um … No?

Now, now, my shocked audience, these are all fair questions.
Please take a seat and get comfortable, get popcorn and your choice of refreshing drink because this will be a long tale.

Or maybe not.

I began working on a project with the Release Engineering Team in May ’14. The project was originally titled ‘Senbonzakura’ after the ‘Zanpakuto’ from ‘Bleach’. Upon introducing the name to my team, everyone had a hard time pronouncing it and remembering how to spell it right.

Except Nick.
Nick is a boss and seems to have a knack for remembering long and hard-to-pronounce names. Maybe it’s from his days as an atomic scientist, remembering all those unintelligible high-atomic-number element names.

But I digress.
It was decided after much debate and controversy (not really) that the project had to be re-named.

My project deals with something called “Partial MARs”, so I figured calling it “Mars Mini” after the namesake chocolatey treats would be clever. Our non-American English-speaking teammates suggested calling it “Mars Funsize” instead, because that’s what it’s known by in other parts of the world.

And so it came to be.

We christened our new-born Partial MAR generation service “Funsize”!
Which kind of fits, because I had a lot of fun working on it.

And you can too!
If you think Python and scalable Web Applications are your kind of thing, you might want to take a look at the Contributing to Funsize and How to contribute to Release Engineering pages and see if Funsize is your idea of fun!
</end id=”shameless plug”>

Internship, Technical

Oh what a Travis-ty.


This post is about how I managed to, one part coax and one part coerce, Travis-CI into doing what I wanted it to do.

It’s a bit lengthy and a bit technical so bear with me.

I could write a TL;DR for it, but I’m not very inclined to do so because I feel this post is meant to highlight the process rather than the end result more than anything else.

There comes a point in time in every developer’s life where he has to get his hands dirty with Databases. There also comes a point in every Open Source developer’s life where he decides to try out Travis-CI, because tests are good, right? Sure, they are.

Continuous Integration testing is even better!
Write tests, run them on every commit, get an email telling you whenever you break something; it’s great, right? Sure it is! It catches human errors and gives you a safety net to fall back upon.

I’ve been working on a project for a while and I had a couple of unit-tests I’d written for it (Shoutout to Hal for nagging me to get those tests in). I also managed to write a hacky integration test that seems to do the full run through.

Now, a couple of days ago, I decided to add MySQL support for the application on the database side; so far I’d been happily using SQLite with no issues. I was using SQLAlchemy and figured I could just change the database URI and MySQL would be a drop-in replacement.

Naive move.

After a couple of hours of Googling, some help from the people in #mysql and more help from my team, I finally figured out that I needed to tweak the MySQL configuration file (my.cnf henceforth)*.

After making the required changes in my.cnf and in the Schema, I head over to my application, fire it up and do a run through for the Nth time to make sure things work as expected… SUCCESS!

Now comes … the hard(er?) part.
Remember the integration test I told you about?
Well it used to run on Travis-ci just fine… until I decided to add MySQL support.

Those of you who have ever had the pleasure of using Travis know that it tries to make things very simple. You tell Travis what you want it to do via the .travis.yml. You tell it what your tests are and how to run them, and it’ll go ahead and do just that.


Or so it is until you want to configure MySQL in ways that are not exposed by the .travis.yml file.

There are always trade-offs between simplicity and power, so this should’ve been expected; this should have been common sense. But like a child who doesn’t learn fire is hot until he pokes his finger in it, there are some lessons that make far more sense in retrospect.

I felt a tad disappointed and bewildered at first, not knowing where to begin.
Part of me abandoned all hope of having integration tests for my MySQL-support branch. But, and yes there is a ‘but’, I decided to stick it out and see what I could figure out.

So I began my journey to hack** around the Travis build system.
There was a lot to do to even begin figuring out what to do.
Things become easier if you break down the process into the following major steps:

  1. Is it possible?
  2. What do I need to do it?
  3. How do I fulfill that need?

To answer #1, I needed to figure out if I had control over, or some way to interact with, the MySQL server and its configuration other than the .travis.yml.

To do this, I started changing the Travis config file to see if I could cat the common locations of the MySQL config file, most notably /etc/my.cnf. I also tried starting, stopping and getting the status of the mysql service.
Then I tried executing a couple of MySQL commands to set the variables I needed.

None of these yielded positive results.

I decided that figuring out what kind of environment I was running in before trying to move further might be a good idea. I ran the basics to figure out what was going on: pwd, who, groups, mysql --help.

Then something happened; a ray of light shone through the darkness.
While looking at the Travis log for the push I saw something that piqued my interest.
Travis was using sudo to install some of the other services I need in the Travis configuration file.


An incandescent lightbulb came on somewhere.
Could it be?
Could I really have root access on this box? I quickly wrote up a sudo ls to test my theory, and whaddya know? It worked! Now that I had root access, I was 99% sure that what I wanted to do was in fact possible, regardless of how hard it might be.

I set out with more confidence knowing that I now had the powers of the root bestowed upon me.

Next up was figuring out #2.
There are plenty of ways to set the options I wanted for MySQL, and I had to figure out which one would work. Following the principle of least resistance, I tried setting the parameters I wanted using MySQL statements. I tried various combinations, preceding and following the MySQL statements with restarts.

No success.

Well this wasn’t working and as someone succinctly put it, “Insanity is doing the same thing, over and over again, but expecting different results.”
Now, I like to think I am not insane, and these combinations weren’t really getting me anywhere, except maybe onto the Travis server abuse/misuse list.

So I decided to try and tweak the configuration files.
The question was figuring out which one.
I began by trying to ls -al the expected location(s); hmm, no luck. But mysql --help told me there were more locations where the configuration file could live, so I just decided to cat all of those at the same time^. This told me that the file I wanted was /etc/mysql/my.cnf.

Got it. I had #2.
I could edit this file and have what I wanted.

So on to #3.

Now ideally I should be able to overwrite this file with the bare minimum I needed and get it working.

Naive. Naive again.

Everything stopped working the moment I did this.
To be fair, the default my.cnf had a lot of configuration and it was probably there for a reason.
So I tried a different route.

I could just go in and add the things I want to the existing file.
Sounds simple enough right?
Except for the part where I don’t have access to the file. Or an editor to edit the file.

Or do I?
Enter sed, sometimes known by its longer and more expressive name: stream editor. So I wrote up a couple of sed commands to add the lines I needed in the required part of the file. After confirming that the substitution actually worked, I pushed with sweaty palms.
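The sed incantation looked roughly like this. The demo below works on a stand-in file so it’s self-contained; on Travis the same sed line ran under sudo against /etc/mysql/my.cnf, and the file contents here are illustrative, not the real distribution config:

```shell
# Stand-in for /etc/mysql/my.cnf on the Travis box
printf '[mysqld]\nkey_buffer = 16M\n' > my.cnf.demo

# GNU sed's 'a' command appends lines right after the [mysqld] section
# header, leaving the rest of the existing configuration untouched
sed -i '/^\[mysqld\]/a innodb_file_format = Barracuda\ninnodb_large_prefix = 1' my.cnf.demo

cat my.cnf.demo
```

Appending after the section header, rather than overwriting the whole file, is what keeps the distribution defaults (which MySQL apparently needs to start) intact.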


Sure it’s not a foolproof solution and it might break whenever they modify the file, but it works and that’s more than I could say before I started.
And isn’t that what counts?

Scroll down to see what my glorious git log looked like after I was done.

^ Why at the same time, you ask? Because Travis will error out on the first failure, and I did not want to try all the different combinations to figure out which one; it’s a waste of time and resources.

* If you’re interested in reading what the actual technical problem I was dealing with was, then read on.

I had a 257-character-long key as one of the fields in my database (in a table in my database, you pedantic people). Now, all this works fine as long as you’re running SQLite, because SQLite doesn’t really care much about what you’re storing; it typically accepts whatever you give it without question and dumps it in a file, and it’s the same when you’re retrieving stuff from the database. In MySQL, though, there are optimizations for speed and redundancy and lots of other stuff.
This means MySQL cares more about the data that you store in it. Enter DataTypes. With these DataTypes also come constraints on their size and length. As you might’ve guessed, MySQL doesn’t support storing a 257-character-long key (VARCHAR or TEXT) with default settings.
It’s an InnoDB thing.
But it does give you a way to configure your options to force MySQL to allow larger keys, and this is in fact what I ended up doing.
The stuff you need in your my.cnf is:

#InnoDB config
innodb_file_format = Barracuda
innodb_large_prefix = 1

You’ll also have to change your row storage format to DYNAMIC.
You can do this in SQLAlchemy like so: __table_args__ = {'mysql_row_format': 'DYNAMIC', ...}
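The arithmetic behind the failure is worth spelling out: with the default Antelope file format, InnoDB caps index key prefixes at 767 bytes, and MySQL’s utf8 charset reserves 3 bytes per character, which is exactly why a 257-character key falls over:

```python
# Default InnoDB index key prefix limit (Antelope file format)
ANTELOPE_PREFIX_BYTES = 767
# Limit once Barracuda + innodb_large_prefix are enabled
BARRACUDA_PREFIX_BYTES = 3072
UTF8_BYTES_PER_CHAR = 3  # MySQL's 'utf8' reserves 3 bytes per character

print(ANTELOPE_PREFIX_BYTES // UTF8_BYTES_PER_CHAR)   # 255 chars -- a 257-char key won't fit
print(BARRACUDA_PREFIX_BYTES // UTF8_BYTES_PER_CHAR)  # 1024 chars -- plenty of room
```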

** I use ‘hack’ in the sense it was originally meant to be used; to imply exploratory interaction with a computer. See this and this for more details.

My Glorious git log


Internship, Technical

Bah! Debugging.

This post is just going to be a little ramble about how dealing with bugs is a PITA.
It’s also a post about why debugging can be fun sometimes. Emphasis on sometimes.

How we typically debug
Naive debugging process





Over the last couple of days I’ve been plagued by one of those phases in a developer’s life where everything (and by everything I mean your code) seems to be breaking and falling apart around you and you can’t for the life of you figure out why.

You begin by suspecting a bug in your program, maybe an incorrect parameter to a function call, maybe a little missing punctuation or something silly like that. You look for it, and on most days you’d find it and fix the minor oversight caused by a lapse in concentration.
But today is not that day.
Today is one of those days.
You don’t find the bug, it’s not going to be that simple.
You look for it but you don’t find it, the plot thickens.

It’s a slippery bugger, but no worries, you tell yourself, you’ve got this. You run through what the program should be doing in your head, and then think back and try to confirm that what you wrote matches up. At this point two things can happen: you either convince yourself that you did in fact implement what you were thinking of, or you second-guess yourself.
If you did second-guess yourself, you might’ve started running through what you wrote just to calm the building annoyance and/or panic.

At this point you typically find something. You go “Oh haha, this. Silly me.” , fix it and secretly feel happy inside because you know you got off easy, you re-run your application and tests.
And they fail.
And your application crashes.
And everything burns, the world is thrown into chaos and all hell breaks loose.
Only this time it’s just a little worse than it was before.
An old saying goes, “Hell hath no fury like a woman scorned”; its author obviously never had the pleasure of encountering a misbehaving computer program.
Because just in case you forgot, It’s one of those days.

At some point between the many back-to-back iterations of panic followed by helpful yet unrelated bug-fixes, I often sit back and stare at my laptop with a sense of deep mistrust.
What is this unfathomable black magic? Why does my dearest disobey me so? After about 4 minutes of deep philosophical questions and about 2 minutes into an existential crisis, a quote typically floats back into my head. It basically boils down to “Computers don’t make mistakes”.
This is true, computers are simply machines doing exactly what they’ve been told to, they’re following instructions.

If your application is not working, your instructions aren’t right. Now if the instructions are wrong you just need to find the
wrong ones and fix them. Once you do find them and fix them and get things working, the euphoria
compares to little else.

I ran into a few strange bugs back to back, but the strangest one took a fair chunk of early-morning hours to figure out. There’s an operation (I say operation, but it’s an entire pipeline) that’s done on the same two input files. The operation itself should be entirely platform-independent, because it’s written in Python for the most part, with a couple of cross-platform shell scripts thrown in. So one would reasonably expect the output to be the same regardless of platform.
The problem is the assumptions in that thought process. The shell scripts worked fine and did everything right, and the output file the pipeline spit out had all the right contents too, but the md5sums did not match up. I went through the entire aforementioned struggle looking for what might be causing the problem.
What was it? It was sort. GNU coreutils sort. Apparently the way things are sorted on OS X is different from the way they are on Linux. Why? I don’t know; there were no locale differences or special flags in use, the difference in output just is.
See this Apple Stack Exchange Answer for reference.
It took me a fair amount of time to hunt these down, and I might never have tracked this one down had it not been for helpful advice from experienced team members and fellow interns.
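If you ever hit the same thing, the standard workaround (not necessarily what we ended up doing) is to pin sort to the C locale, which forces plain byte-order comparison and behaves identically everywhere:

```shell
# Collation-dependent: 'a' vs 'B' ordering can vary across platforms/locales
printf 'a\nB\n' | sort

# Byte order: reproducible everywhere, uppercase ASCII sorts first
printf 'a\nB\n' | LC_ALL=C sort
```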

This is where a well thought out debug routine would come in handy.
I’ve stolen mine from Sherlock Holmes, because if you’re trying to get to the bottom of a mystery, you might as well put your detective hat on, right? The basic idea is to keep digging until you can dig no further, at which point, in the words of the great detective himself: “when you have eliminated the impossible, whatever remains, however improbable, must be the truth”.

The key here is to actually eliminate the impossible before drawing conclusions about the improbable. The generic way of doing this, as I understand it, is to formulate a hypothesis about what could be going wrong and/or where. Then you need to do something that will either confirm your hypothesis or absolutely, irrefutably disprove it. If the thing you’re doing neither proves nor disproves your suspicions, you’re doing it wrong and you should try doing something different; otherwise you’re just throwing mud at a wall and expecting some of it to stick.

Once you’re done with the ‘thing’, you now know which chunk of your application is causing trouble. You can now formulate a new hypothesis about which smaller chunk within your newly determined chunk is causing trouble, and repeat the process at ever finer granularity.

In more formal terminology the idea of debugging can be distilled to the following:
“Finding your bug is a process of confirming the many things that you believe are true, until you find one which is not.”
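That distilled idea turns into a surprisingly mechanical procedure. A hypothetical sketch of it (the pipeline and beliefs here are toy examples of mine, not real code): write each belief down as a check, walk them in order, and the first one that fails is where the bug lives:

```python
def first_false_belief(beliefs):
    """Walk ordered (name, check) pairs; return the name of the first
    belief whose check fails, or None if every belief held."""
    for name, check in beliefs:
        if not check():
            return name
    return None  # every belief held -- look elsewhere

# Toy pipeline where the sorting assumption is the buggy one
data = ["b", "a", "C"]
beliefs = [
    ("input read correctly",     lambda: len(data) == 3),
    ("sort is case-insensitive", lambda: sorted(data) == ["a", "b", "C"]),  # it isn't!
    ("output written",           lambda: True),
]
print(first_false_belief(beliefs))  # sort is case-insensitive
```

Each check either confirms a belief or irrefutably disproves it, which is exactly the eliminate-the-impossible loop described above.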




The things mentioned in this post may seem self evident or obvious to more experienced hands, but this line of thought is interesting from my point of view because I’ve just begun to understand that debugging isn’t just throwing a bunch of printfs in your code in hopes of figuring out what went wrong by looking at the output. It’s a fairly logical and scientific procedure that’s closer to medical dissection than anything else.




Internship, Mozilla, San Francisco

Summer in Cali!


I’ve been wanting to resume writing for a long long long time now.
A lot has happened since the last time I wrote for this blog and I’m happy to report a lot of it has been good!

The long and the short of it is that I’ll be (am) interning with Mozilla’s Release Engineering Team.
These are the guys that manage the releases, the build infrastructure, and release automation. In short, these are the guys responsible for making sure Firefox ships out on time. Make no mistake, it’s no easy task to ship to half a million users; any lingering doubts about the complexity of this task can probably be dispelled by looking at this diagram.

The office space we have is amazing (pictures below). Interns have their own desks and an area where we can muck around without incurring the wrath of the employees with all the noise we make. On our first day as interns we got a whole lot of Mozilla schwag from Jill and Misty (our friendly neighbourhood recruiting team); this schwag included a very nice Firefox-branded laptop bag, a 2014 interns hoodie, and Mozilla-branded socks and sunglasses. So. Much. Win.
I also got to meet a lot of fun people, including my current mentor Hal Wine and my GSoC mentor Clint Talbert, both of them really fun people. It’s very exciting and a lot of fun to finally meet face to face with the people you’ve worked with and have been talking to over IRC for such a long time.

I’m working on a project (code named Senbonzakura) for the team, which basically involves a web service that generates Firefox updates on demand, but, more about that in the next post!

I’m in San Francisco, one of the most beautiful cities in the US. If one were to try and imagine the existence of such a thing as perfect weather, San Francisco’s would be pretty close. There’s the fresh cool breeze coming in off the sea, there’s the warm but not hot (a tad mild at times, but maybe that’s just me) California sun beating down on a city by the bay. It gets a little too chilly for me at times, but it’s nothing a thin sweater or light jacket can’t fix, and did I mention the amazing sea breeze?

There’s always a lot going on in the city at any given day, but it’s the smaller attractions that are the most fun.
There’s so much to take in that it can get very overwhelming very quickly at times, but mostly it’s fun to watch the picturesque city go about doing whatever it likes. I’ve seen people playing drums in the middle of the street; I’ve seen bikers do wheelies, stoppies, acrobatics, spins, and many, many indescribable moves that seemed so flawless as to be almost inhuman. A friend and I ran into musicians jamming with an acoustic guitar and violin on the street and stood there mesmerized for a while before we realized the song had ended.

In the time I’ve been here I’ve had a lot of fun and a lot of fun kitchen accidents as well, if you know what I mean*.
The US is a strange yet exciting land, and despite exposure to a lot of American and Western media and culture, there are some things you never truly understand until you experience them first-hand.

I’ve been trying to take pictures to summarize my journey, and although I admit I haven’t done a good job of clicking pictures religiously, I do still have a few pictures to share, so here goes:

This is the view I was greeted with when I first entered my apartment, needless to say, I couldn’t say much for a while afterwards.

This is the view from my office desk. If you tell me you have a better office space, I’ll have a hard time believing you.

This is a picture of my desk. Jill gave us all a cute little fox thing, and the cute little fox thing keeps me company while I’m at work.

Everyone, this is the Island Princess, a mammoth cruise ship anchored at one of the piers. I had to use Panorama to take a picture of this massive lady because she wouldn’t fit in a single frame.

This is what a dock full of sea lions looks like. They’re noisy and they’re smelly, but there’s something fun about watching them laze around without a care in the world.

The Bay Bridge lights up at night, and this is what I see from my bedroom window every night as I go to sleep. (Well, it’s much better than this in person; my phone camera doesn’t do the view any justice.)

Now that I’ve overcome inertia, hopefully you’ll see more posts from me soon!
And with that hopeful promise I shall sign off for now.


* I burnt the toast and set the fire alarm off. Yes, this really happened.

Google Summer of Code, Mozilla

The Grand Finale


This will probably be my last post regarding my Summer of Code with Mozilla, and I imagine that if I were writing with paper and quill, I would indeed be penning this last entry with a heavy hand.

There exists a saying that tells me “All good things must come to an end”.
Although sadly true, sometimes I wonder if some things are too good to deserve the same fate as all the other good things.

My stint with Mozilla is a case in point. The past three months have been immensely enjoyable, full of excitement and activity. I had the pleasure of working with an amazing team of people who work for a noble cause. My stay with the Automation and Tools team has been an enriching experience, and one that I shall cherish for a long time.
My writing might convey a sense of parting, but thankfully, once introduced to the world of open source, one can never truly part. I hope to stay on and contribute to Mozilla in the future as I have in the past.

Since this is the penultimate, if not the ultimate, post in this series of Summer of Code posts, I feel the need to briefly summarize the events of the days past.

My project with Mozilla over the summer consisted of writing tests for Mozbase. Mozbase is a base library used by most of the other test harnesses that automate the installation, logging and testing of the various software products Mozilla builds, Firefox and FirefoxOS being the most notable.
One by one I picked up the modules in Mozbase and wrote tests for them, at a rate of roughly one module every week and a half.
I must pause here and point out how well planned the schedule for my project was: it allowed me enough time to get comfortable with new code before handing me slightly more complex modules. For this I owe a debt of gratitude to Clint Talbert (my mentor for the project) who, basically, had it all figured out.

I started my quest with Mozfile (#885224), a cross-platform utility for handling file I/O operations. Mozinfo (#885145) came next; this module was a bit of a surprise to me, for I hadn’t seen one like it before. All it does is report information about the platform it runs on, or sometimes simulate information about a different platform for testing purposes.
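To give a flavour of what a mozinfo-style lookup does, here’s a minimal sketch built purely on the standard library. The function name and dictionary keys are illustrative, not mozinfo’s actual API:

```python
import sys
import platform

def get_platform_info():
    """A toy, mozinfo-flavoured platform lookup (hypothetical API).
    Real mozinfo exposes more fields and handles more platforms."""
    return {
        # Collapse sys.platform into a coarse OS name.
        "os": {"win32": "win", "darwin": "mac"}.get(sys.platform, "linux"),
        "version": platform.version(),
        # 64-bit Pythons have a maxsize larger than 2**32.
        "bits": 64 if sys.maxsize > 2**32 else 32,
        "processor": platform.machine(),
    }

info = get_platform_info()
print(info["os"], info["bits"])
```

The useful part for testing is that a dict like this can also be hand-built to *simulate* another platform, which is exactly the trick the module plays.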

Having worked with, understood and written tests for two modules successfully, I now faced Moznetwork (#796017). This is where I first ran into what would later become a recurring challenge throughout my project. This challenge goes by various names: sometimes it’s called WindowsError, sometimes it’s some MacOSX-only exception; most people know these collectively as cross-platform issues. After copious amounts of time and effort, mostly from jhammel, wlach, ahal and Mook (who would have burst a vein answering my repeated questions had they not been endowed with monk-like patience), the Mozinstall tests finally started working properly. Moznetwork, on the other hand, was kind to me: I got to play around with network interfaces and write some regex, which is always fun.
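The regex work was along these lines: pulling addresses out of interface-listing output. The sample text and helper below are my own illustration (real `ifconfig`/`ipconfig` output varies by platform), not Moznetwork’s actual code:

```python
import re

# Hypothetical ifconfig-style output; real output differs per platform.
sample = """
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.42  netmask 255.255.255.0  broadcast 192.168.1.255
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
"""

# Match every IPv4 address that follows an "inet" token.
IPV4 = re.compile(r"inet\s+((?:\d{1,3}\.){3}\d{1,3})")

def find_addresses(text):
    return IPV4.findall(text)

print(find_addresses(sample))  # ['192.168.1.42', '127.0.0.1']
```

Cross-platform pain shows up even here: on Windows you’d be parsing `ipconfig` output with entirely different labels, which is why a module like Moznetwork exists in the first place.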

On to another network-ish module, Mozhttpd (#889709), a simple webserver written in Python and used for internal testing. Mozhttpd was particularly memorable because it was the only module whose tests failed for me after both a fresh pull and a clone; after some debugging I found it was a proxy/environment issue (I access the internet via a proxy, so I have http_proxy variables set everywhere). I’m not sure if I ever filed a bug for it though, since it’s a weird edge case; lucky me, sitting right on that edge :P .
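For anyone else sitting on that edge, the failure mode is easy to reproduce with the standard library: once `http_proxy` is set, Python’s urllib will happily route even localhost requests through the proxy unless the host is explicitly exempted. A small sketch (the proxy URL is made up):

```python
import os
import urllib.request

# Simulate my setup: proxy variables set everywhere (hypothetical proxy).
os.environ["http_proxy"] = "http://proxy.example.com:8080"

# urllib picks the proxy up from the environment...
print(urllib.request.getproxies().get("http"))

# ...so requests to a local test server (like the ones the mozhttpd tests
# spin up) would be sent to the proxy and fail. Exempting localhost via
# no_proxy is the usual fix.
os.environ["no_proxy"] = "localhost,127.0.0.1"
print(urllib.request.proxy_bypass("127.0.0.1"))  # truthy: localhost is exempt
```

Setting `no_proxy` (or unsetting `http_proxy` for the test run) was enough to get the tests green again in an environment like mine.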

Mozhttpd marked the end of the simpler modules; time to meet the heavyweights: Mozdevice (#894062), Mozprofile (#898265) and Mozprocess (#778267).
Up first was Mozdevice, a particularly enjoyable module: I got to muck around with Android internals, emulators and adb a bit. wlach really helped me through this one, with lots of tips that proved invaluable when I needed to set up the test environment. Being a large module, this one took me more time than the earlier ones, a little over two weeks, but by the end of it we had tests for the DeviceManager class.

After wrapping up Mozdevice I moved on to Mozprofile (#898265). Mozprofile is a module that handles almost all Firefox profile-related tasks, from core profile data to addons. Mozprofile saw me reading through the code more carefully than the other modules, partly because of the tricky cleanup/__del__ procedure and partly because I’d had not-so-pleasant run-ins with mozprofile in the past. It wasn’t nearly as bad as I’d originally imagined it would be, and in fact it turned out to be quite an enjoyable experience. I ended up mostly writing tests for the addons manager. I also found a couple of tiny bugs and filed a few more during the course of my “investigations”, some of which were fixed right away, incidentally.

Lastly we come to Mozprocess (#778267). Mozprocess is a monster module in terms of complexity and size (I exaggerate, of course, but yes, it’s big and complex). The existing mozprocess tests were written in a mixture of C and Python, which made them hard to compile and run, especially on the build slaves in Mozilla’s automation infrastructure.
My first task was to try to port the existing C tests to Python, or, if that didn’t work out, to rid the existing C tests of their external library dependency, making them easier to compile, run and automate. The Python porting worked out, and I started rewriting all the existing tests to work with the new API that jhammel and I came up with.
The new API allows creating arbitrary process trees, with multiple children, each with its own timeout at each level of depth. Although this might sound fancy, it’s nothing more than a little rewriting of the controlling .ini manifest file. The process-launcher code did require a major overhaul for the new API, though. During the process of porting and redoing, I wrote some of my most beautiful and comprehensive documentation, even if I say so myself. :P
If you’ve been reading carefully, you’ll remember my arch-nemesis, WindowsError. Well, guess what? It decided it was about time to drop by while I was on my final module. Long story short, the tests still fail on Windows and that needs fixing; the *nixes, on the other hand, are running right as rain.
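To illustrate the manifest-driven process trees, here is a sketch of how an .ini file can describe an arbitrary tree with a timeout at every level. The manifest format below is invented for illustration; only the idea (an .ini manifest driving the tree) comes from the real tests:

```python
import configparser

# Hypothetical manifest in the spirit of the mozprocess test manifests:
# each section is a process, listing its children and its own timeout.
MANIFEST = """
[root]
children = worker_a worker_b
timeout = 10

[worker_a]
children = worker_a1
timeout = 5

[worker_a1]
timeout = 2

[worker_b]
timeout = 5
"""

def build_tree(cfg, name):
    """Recursively turn manifest sections into a nested dict."""
    section = cfg[name]
    kids = section.get("children", "").split()
    return {
        "name": name,
        "timeout": section.getint("timeout"),
        "children": [build_tree(cfg, kid) for kid in kids],
    }

cfg = configparser.ConfigParser()
cfg.read_string(MANIFEST)
tree = build_tree(cfg, "root")
print(tree["children"][0]["children"][0]["name"])  # worker_a1
```

A process launcher then only has to walk this structure, spawning each child with its own timeout, which is why adding deeper or wider trees is “nothing more than a little rewriting” of the manifest.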

All the code I’ve produced over the summer has been reviewed by at least one member of the Automation and Tools team and checked into the GitHub repository, which can be found here. All documentation lives within the code itself, and all checked-in code contains the relevant documentation.

Related misc. fixes and filings: