lixo.org

Thoughts on My Experiments With Io

After a fit of Smalltalk envy a while ago, I’ve been playing with the idea of keeping an instance of my VM seemingly “always up”, adding more and more code to it iteratively as I develop a system.

On top of that, after looking at the Rails Migrations and other database refactoring tools, it occurred to me that I should be able to do something quite similar to what migrations do to databases in my own code: instead of trying to keep a very regular and smooth codebase, I’d keep track of the changes applied to an “empty” environment, each change providing some value and tests on its own. Also, I should be able to dump some of the VM’s contents back into a script that can be executed on a “clean” VM and the behaviour between both machines should be the same.

My current language of choice, the very small, elegant and simple Io, made it easy to test out the concept. As a prototype-based, extremely dynamic language, you can pretty much reopen anything you want and attach more behaviour to it at any point. So, here are the basic rules I chose to develop an application that looked for collocations in a body of text:
  • Every change script should live on a file of its own, sequentially numbered (like 012_add_foo.io), containing all the changes necessary to the system to implement a particular feature (or story).
  • Every change script should have a test for the new functionality in a separate file in the test/ directory. Preferably, new functionality should be driven by them.
  • The change scripts will be run in sequence by a loader. The loader may also execute unit tests in pair with the change scripts or after all change scripts are loaded, to verify later change scripts did not break any of the existing behaviour. After the loader runs, the system should be ready to use.
  • If the latest change script is causing the code to break, fix it; but scripts that have already been superseded by others shouldn’t be changed unless they block development of new change scripts (with a syntax error, for instance). This makes more sense when change scripts are created by tools instead of humans.

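To make the rules concrete, a change script under this scheme might look something like the following. This is a hypothetical example, not one of my actual collocation scripts — the TextSource prototype and its slots are invented here — but it shows the basic move Io makes easy: reopening an existing prototype and attaching new behaviour to it.

// src/003_add_word_counter.io -- hypothetical change script
// Reopen the TextSource prototype (defined by an earlier change
// script) and attach a new slot to it.
TextSource wordCount := method(
    contents split(" ") size
)

The matching test would live in test/003_add_word_counter.io and exercise only the behaviour this script introduces.
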
So far, it has worked quite well - I ended up with 25 change scripts and one patch to the runtime APIs (which got accepted, yay!), plus a pretty simplistic loader script:

Directory folderNamed("src") fileNames sort foreach(name,
    if(name != "main.io" and name endsWithSeq(".io"),
        "Loading #{name}" interpolate println
        doFile("src/" .. name)
    )
)
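
For what it’s worth, pairing the test runs with the loader (the part I didn’t get around to) could look roughly like this — a sketch I haven’t battle-tested, which assumes the test files mirror the change script names under test/:

Directory folderNamed("src") fileNames sort foreach(name,
    if(name != "main.io" and name endsWithSeq(".io"),
        "Loading #{name}" interpolate println
        doFile("src/" .. name)
        // Run the matching test file, if one exists, so a later
        // change script can't silently break earlier behaviour.
        testFile := File with("test/" .. name)
        if(testFile exists, doFile(testFile path))
    )
)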


Because of my unfamiliarity with Io, I ended up writing a pretty brittle unit test suite and broke the “build” a lot (my colleagues would certainly point out that I break the build a lot in other languages, too), and I didn’t manage to integrate the test runs into the loader either.

Anyway, the whole point of this exercise was this: it’s possible to do some really cool, dynamic refactorings when developing software like this.

I’m about to write my little Refactoring object, which will allow me to do things like:

Refactoring renameObject(Foo, Fubar)
Refactoring renameMethod(Foo, foo, fubar)


The renaming of objects and methods won’t happen instantly across the codebase, as you’d expect from IDEs like IntelliJ or Eclipse. Instead, the method or object to be renamed gets replaced with a proxy to the new one, and every time the proxy gets hit, the calling method gets its code dumped to a new change script, with the appropriate replacements.
This should give me 100% accurate refactorings with very little disturbance to my development flow (of course, I had to change the way my flow works, but I think that’s a good compromise).
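
A first cut of renameMethod might look roughly like this. It’s very much a sketch — it hand-waves the interesting step of dumping the caller’s code to a change script, settling for flagging stale callers — and the doString trick is there because Io’s method() doesn’t close over the locals of the method defining it, so the proxy is built from an interpolated string instead:

Refactoring := Object clone
Refactoring renameMethod := method(obj, oldName, newName,
    // Move the real implementation to its new name...
    obj setSlot(newName, obj getSlot(oldName))
    // ...and leave a proxy at the old name that flags the stale
    // caller, then forwards to the renamed method with the
    // original arguments.
    obj doString("
        #{oldName} := method(
            writeln(\"stale caller: \", call sender type,
                \" (should be dumped to a change script)\")
            self performWithArgList(\"#{newName}\",
                call message argsEvaluatedIn(call sender))
        )" interpolate)
)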