Programming

Thursday, September 28, 2006

A new VPL based on AOP

I've been reading about aspect-oriented programming (AOP), and had it explained to me that concerns "cross-cut" methods. Then John put his fingers together in a mesh. This was a bit of a kerchink moment: you could create a visual programming language (VPL) in which aspects are displayed on the y-axis and methods or functions on the x-axis. This would show very clearly which functions carry which concerns.
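
To make "cross-cutting" concrete, here is a throwaway Java sketch using dynamic proxies (the Account names are made up, and a real AOP framework such as AspectJ would express the same thing declaratively as a pointcut):

    import java.lang.reflect.Proxy;

    interface Account {
        void deposit(int amount);
        void withdraw(int amount);
    }

    class BasicAccount implements Account {
        private int balance;
        public void deposit(int amount)  { balance += amount; }
        public void withdraw(int amount) { balance -= amount; }
    }

    public class CrossCutDemo {
        // One logging concern cuts across every method of the interface -
        // exactly the relationship the grid would make visible.
        @SuppressWarnings("unchecked")
        static <T> T withLogging(T target, Class<T> iface) {
            return (T) Proxy.newProxyInstance(
                    iface.getClassLoader(),
                    new Class<?>[] { iface },
                    (proxy, method, args) -> {
                        System.out.println("entering " + method.getName());
                        return method.invoke(target, args);
                    });
        }

        public static void main(String[] args) {
            Account acct = withLogging(new BasicAccount(), Account.class);
            acct.deposit(100);   // prints "entering deposit"
            acct.withdraw(40);   // prints "entering withdraw"
        }
    }

In the imagined VPL, "logging" would be one row, deposit and withdraw two columns, with marks at the intersections.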

Friday, September 08, 2006

HCI Considered Harmful?

I've just come back from VL/HCC 2006 having presented a short paper on one of my pet projects: Visula. Here is the paper, and here are the slides.

Overall it was a very positive experience. I find the general area interesting - if the conference was a bit cheaper and always in the UK I would want to attend every year.

What surprised me a lot was the amount of work on end-user and novice programming. The papers were roughly divided into spreadsheets, learning through play, visual support for developers, and a little bit of visual parsing.

One comment is that it is a very applied subject. Everything follows the same pattern: here is a problem, here is a tool to solve it, we gave it to some users, here are the numbers. This is actually pretty boring! The field does not seem to have moved on very far. I think the sheer mass of papers is because everything is done in finicky little steps. No fundamentally new ideas were presented in the entire conference, and I had no "aha" moments. Everything had to be proven to be of real benefit to a group of users, which is certainly not blue-sky research.

A problem I see is that the words "programming" and "software" aren't used in the conventional sense, and I would much rather see the term end-user development or engineering. I see this as a ploy to place these subjects into computer science, where funding is more plentiful, even though the work is strictly speaking interdisciplinary with accountancy, [software] engineering, psychology or education.

What I particularly enjoyed was the "how-to" session: how to do research, how to write a paper, how to transfer technology to industry. The guidelines they gave were completely idiot-proof. I definitely learned a lot, and wish my PhD supervisor could have told me some of those things.

The best-paper award went to Christopher Hundhausen et al., deservedly, because it fitted the formula well. It brought out the questions and hypotheses clearly, its references were complete and had good coverage, the data were collected well, the statistics were strong, and it tackled an interesting question. The work itself was secondary - it was okay I suppose!

The keynotes by Ron Baecker and Thomas Green were top-notch, and my opinion of Thomas Green improved from irritating pedant to warm, clever, insightful, interesting and entertaining. Pioneers have a certain charisma and drive that keeps them going for so long - qualities to aspire to.

In his keynote, Thomas proposed that we should specify almost nothing, since then we avoid the problems of specification. At first I thought this was stupid - surely it just leads to ambiguity, and when computers use their intelligence to fill the gaps, they get it wrong. But less specification does not necessarily lead to ambiguity, and computers can indeed outperform humans in many areas when filling in the gaps for us. An example: typed languages. Thomas implies that we shouldn't specify types in our programs; we should leave it up to the compiler. I couldn't agree more!
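
To show the flavour of it, here is a sketch in modern Java (whose var keyword arrived much later; ML-family languages have done full type inference for decades):

    import java.util.ArrayList;

    public class InferenceDemo {
        public static void main(String[] args) {
            // No type annotations: the compiler fills in the gap,
            // and the program is no more ambiguous for it.
            var names = new ArrayList<String>();   // inferred: ArrayList<String>
            names.add("stave");
            for (var name : names) {               // inferred: String
                System.out.println(name.toUpperCase());
            }
        }
    }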

Instead I would say "specify as little as possible, but no less". Whenever we specify anything, it is because the computer can't fill in a gap. But all too often we tie our hands unnecessarily. As computers have gotten better (for example at generating machine code), we have been able to leave out that part of the specification and work in a high-level language.

Often we imagine we really need that extra bit of complexity, when actually we don't. This is just "less is more". The more computers can automate, the less we need to specify for them, until eventually they'll stop plaguing us. In fact, the thesis should be "HCI considered harmful": the less interaction with computers we need, the better. I digress.

Both of the keynotes actually supported my work - Thomas by suggesting that a stave notation was truly beautiful, and Ron by suggesting that visual programming was a worthy goal. I clearly need some pictures of vines and horses around my visual programs, on a parchment backdrop. I think Ron actually said this because he believed that's what we wanted to hear, even though many researchers have given up on the idea as too problematic.

It was very difficult to get people really engaged about visual programming. People were always mildly positive and interested in my work, but I met nobody who was passionate. My talk went well - I always come across a bit nervous at first, which is why it is good to make myself give talks, since I actually enjoy it. No tough questions - Erwig asked whether the + operator was overspecified, since it is symmetric. My answer was that this was a problem in text as well, which was completely the wrong answer: the + operator is not symmetric, e.g. on strings. People were positive that someone was still actually working on VP, and I think people bought my argument that the benefits of sequence diagrams could be transferred into programming.
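
What I should have said, in a few lines of Java (the strings are made up, obviously):

    public class PlusDemo {
        public static void main(String[] args) {
            String a = "foo", b = "bar";
            System.out.println(a + b);   // foobar
            System.out.println(b + a);   // barfoo: "+" on strings is not symmetric
        }
    }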

If I really wanted to take Visula forward I would have to round-trip to a language like Java. The problem is that the languages are quite different and it would be like trying to stick a square peg into a round hole. We shall see.

My other motive for going to this conference is that I really want to get into spreadsheet "programming". A friend of mine currently does business modelling on huge spreadsheets, and wants a better way. Nothing at the conference came close to solving the problem of managing large spreadsheets, so there is definitely an opportunity to develop something.

Sunday, September 03, 2006

Human-centric computing??

I am now ready to go to the conference (VL/HCC 2006) and present. I have managed to squeeze the talk down into 12 minutes.

I have come to the conclusion that nobody cares about visual programming. This is evident from the complete lack of implementations. It was also clear that only one of my three reviewers knew anything about visual programming at all. None of them bothered to look at the website or try to use the software.

To make matters worse, the next conference (VL/HCC 2007) does not even include visual programming or visual languages in its list of topics! I can clearly see them renaming the conference to HCC'08 in the future.

It has been argued that visual programming is more suitable for end-users and novices. That is fine, except for the problem that most programming in this world is done by experienced programmers. It depends on what you mean by "programming".

The VL part of VL/HCC hasn't happened because the world has so much invested in textual languages. After a while, people concluded that VPLs don't work. This is not true; they just haven't been available. When I set out to create Visula, my starting thesis was that it would suck and wouldn't work. But guess what - with a bit of experimentation I created a system that worked absolutely fine.

If you think about it, nobody is actually going to create new VPLs. Final-year project students aren't going to because the task is much too large. University research projects aren't because researchers aren't usually that experienced at programming, and nobody could get funding for such a project. Companies aren't going to create VPLs because there is no money in writing new programming languages. That leaves the open-source community, who do it for the love of it.

It will be interesting to see whether my opinion of human-centric computing goes up or down after this conference. I will also see if I can meet anybody who actually gives a damn about visual programming. It was such a hot topic 10-15 years ago - where has it gone now?