I had to laugh at Liming’s latest micro-blog posting, “Why are papers in top conferences very boring (these days)?” It’s funny, but I’m not sure I entirely agree – I think top conferences do have interesting papers. Liming is saying interesting ideas won’t necessarily have had time to be well validated, and by the time you have validated and published your idea in a top conference, it’s no longer new (and interesting). However, I don’t want to see completely unvalidated ideas. Ideas are cheap. I want to see ideas that are realisable, and whose value has been described and justified somehow.
To the extent that Liming’s wry diagram is true, I think it’s more true of journals than conferences. In most academic disciplines, journals are regarded as the “proper” place to publish significant results. Computer science is different – top conferences in computer science (and software engineering) can be more important than journals. Citeseer statistics show that most of the highest-impact computer science venues are conferences, and even some workshops have more impact than some top journals! But (or perhaps because of this!) journals in computer science have longer review and publication lead times than conferences, so the results there can be more out of date and hence less interesting. (That is a bit odd when you think about it – journals are published several times a year, whereas each conference happens at most once a year – surely journals should be able to be more responsive than conferences in publishing new results?!)
Anyway, it makes me wonder how Ricky’s citemine system would work in the conference milieu. I guess for maximum market efficiency in citemine, the evaluation for new papers should take place in public. So, no workshop or conference would ever have “new” results – everything that made it through “review” (weighted average market price over the period since the last conference greater than some threshold for papers within some discipline boundary?) would have been published for the best part of a year. Conferences would be more like the Oscars – glorifying new exciting productions – rather than a way of learning about recent results. Maybe that’s OK – I think the greatest value of conferences is networking and nuance, and you would still get that at a Computer Science discipline’s “Academy Awards”. But these would be very different events, and norms of academic precedence would need to be re-conceptualised.
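To make that “review by market” rule a little more concrete, here is a minimal sketch of what the acceptance test might look like. Citemine’s actual mechanics aren’t described here, so the price-history format, the recency weighting, and the threshold are all assumptions of mine, purely for illustration.

```python
from datetime import date

def accepted(price_history, since, threshold):
    """Hypothetical citemine-style 'review': accept a paper if its
    recency-weighted average market price since the last conference
    clears a threshold (the weighting scheme is just one arbitrary choice)."""
    obs = [(day, price) for day, price in price_history if day >= since]
    if not obs:
        return False  # no market activity since the last conference
    weights = [(day - since).days + 1 for day, _ in obs]  # later prices count more
    weighted_avg = sum(w * p for w, (_, p) in zip(weights, obs)) / sum(weights)
    return weighted_avg >= threshold

# Toy usage: one paper's price observations since last year's conference.
history = [(date(2009, 3, 1), 4.2), (date(2009, 6, 1), 5.8), (date(2009, 8, 1), 6.1)]
print(accepted(history, since=date(2009, 1, 1), threshold=5.0))  # True
```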
NICTA’s recent Techfest in Sydney saw a flurry of news around the announcement of a significant research achievement – the formal verification of the seL4 microkernel. The team developed a mathematical proof of the functional correctness of the microkernel down to the level of the C source code.
The achievement is important for two reasons. Firstly, it makes it possible to prove code-level functional correctness of whole computer systems built on the seL4 microkernel. Systems built this way can carry a new kind of assurance, supporting strong arguments for the safety and security of high-integrity software.
Secondly, it makes the creation of these proofs more feasible in practice. It should now be possible to formally verify properties of whole systems even when only the key parts of the system are themselves formally verified. This is critical for the practical application of formal verification. Verifying the seL4 microkernel took more than 20 person-years of effort for just 7,500 lines of C source code. Modern embedded computer systems can have millions of lines of source – it’s not practical to formally verify all the code in such systems at that level of productivity.
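To put that productivity figure in perspective, here is a rough back-of-the-envelope extrapolation (my own arithmetic, using the effort numbers above and a nominal one-million-line system purely as an assumed example):

```python
# Rough extrapolation from the seL4 effort figures quoted above.
effort_person_years = 20      # "more than 20 person-years"
verified_loc = 7500           # lines of C that were verified

loc_per_person_year = verified_loc / effort_person_years    # ~375 verified LOC per person-year

# A nominal modern embedded system (assumed size, for illustration only).
system_loc = 1_000_000
estimated_effort = system_loc / loc_per_person_year          # ~2,700 person-years

print(f"{loc_per_person_year:.0f} verified LOC per person-year")
print(f"~{estimated_effort:.0f} person-years to verify a {system_loc:,}-line system")
```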
However, you don’t need to verify all the code! seL4 provides rigorous separation between processes running on top of the microkernel, and that separation is guaranteed by the recent proof. If you can isolate the safety-critical or security-critical parts of an entire system into one small, formally verified component running in its own seL4 process, it should be possible to lift the guarantees for that component to the whole system, even if the other components in the system haven’t been verified.
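To illustrate the shape of that argument (and only its shape – this is not the actual NICTA approach, nor the seL4 API), here is a toy sketch: the system-level claim holds if every critical function lives in a verified component, and every unverified component is kept behind a kernel-enforced separation boundary. All of the names and flags below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    verified: bool            # has a code-level correctness proof
    handles_critical: bool    # implements safety/security-critical functionality
    isolated: bool            # runs in its own kernel-enforced protection domain

def system_assurance_argument(components):
    """Toy version of the 'lift component guarantees to the system' argument:
    every critical function must live in a verified component, and every
    unverified component must be isolated so it cannot interfere with them.
    (Purely illustrative -- the real argument rests on the seL4 proofs.)"""
    for c in components:
        if c.handles_critical and not c.verified:
            return False, f"critical component {c.name} is unverified"
        if not c.verified and not c.isolated:
            return False, f"unverified component {c.name} is not isolated"
    return True, "component guarantees lift to the whole system"

# A hypothetical system: one small verified critical component, plus legacy code.
system = [
    Component("control_loop", verified=True, handles_critical=True, isolated=True),
    Component("user_interface", verified=False, handles_critical=False, isolated=True),
    Component("diagnostics_logger", verified=False, handles_critical=False, isolated=True),
]
print(system_assurance_argument(system))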
That is the challenge for a new project at NICTA – Trustworthy Embedded Systems. The plan is to develop technologies to support the creation and verification of entire systems running on top of the microkernel. This is a large project involving several NICTA labs and researchers from many disciplines (operating systems, formal methods, software architecture). I’m part-allocated to the project for the next few years. My PhD was in formal methods, and this is really the first time in a decade that I’ve dipped my toe back into that area. However, I won’t be doing too much theorem proving myself – my focus in the project will largely be on the other software engineering issues that arise in this context, such as configuration management, and how to use the component architecture to support product line development.
WICSA was fun. I usually find the most I can hope for in a conference is 1 or 2 papers that are really interesting, but I think WICSA cleared 5, so it was well worthwhile. What I particularly enjoy about conferences is hearing how people verbally describe the ideas and challenges in the field. You can get so much more nuance and emphasis from hearing people talk about their research, compared to just reading papers.
A great example was the final keynote for the conference, by Alexander Wolf. He reflected on his personal history of working in software architecture, and because he is one of the “fathers” of the field, his talk was also a history of early software architecture research. It was fun to play spot-the-co-author, both in the audience and among other acquaintances.
He talked about the importance of simulation and experimentation for architecture, and called for more work to be done in the area. At NICTA, Jenny Liu and Paul Brebner have been leading work in these areas, particularly for performance analysis of enterprise architectures. They’ve been getting huge interest from industry. It’s a very promising approach and I can support the observation that simulation and experimentation are critically important to the discipline of software architecture.
Alexander Wolf was also previously involved with Software Configuration Management research, which is an interest of mine. He didn’t really elaborate on that line of work, but he did mention a paper of his discussing the relatedness of software architecture and configuration management. I think there’s still a lot more that can be said in this area, particularly concerning architecture evolution.
Yesterday WICSA 2009 finished. There were a number of interesting talks over the three days of the conference. One was by Richard Taylor on Architectural Styles for Runtime Software Adaptation. He was discussing a framework (BASE) for comparing approaches to dynamic runtime adaptation. The model classifies how various architectural styles deal with Behavior, Asynchrony, State, and Execution Context for adaptation.
One of the frameworks being analysed with BASE was CREST – Computational REST. In CREST, pieces of computation are represented as URLs and can be moved around the web just as static content is. Richard gave a demo of CREST in action – showing pieces of independent computation running and serving dynamic content to multiple distributed browsers. It certainly had a “wow” factor. It reminded me very strongly of the Google Wave demo. But CREST is a more general architecture – it’s not committed to the threaded content model that’s deeply built into Google Wave. Could you reimplement Google Wave on top of the CREST framework? It looks plausible, and it might also help you create and share a much richer variety of dynamic content – to put yourself ahead of the Wave (pun intended).
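The details of CREST’s mechanics weren’t in the talk, so the following is only a toy illustration of the core idea as I understood it: a URL names a computation rather than a static document, and the computation can be relocated between hosts and re-evaluated wherever it lands. All the names here are mine, not CREST’s.

```python
import time

class Host:
    """A toy 'web host' whose URLs name computations rather than documents."""
    def __init__(self, name):
        self.name = name
        self.computations = {}   # path -> zero-argument callable

    def publish(self, path, computation):
        self.computations[path] = computation

    def get(self, path):
        # Dereferencing the URL *runs* the computation and returns fresh content.
        return self.computations[path]()

    def move(self, path, other_host):
        # Relocate the computation, just as static content can be copied around.
        other_host.publish(path, self.computations.pop(path))

# A small piece of computation that serves dynamic content.
def server_time():
    return f"The time here is {time.strftime('%H:%M:%S')}"

sydney = Host("sydney.example.org")
cambridge = Host("cambridge.example.org")

sydney.publish("/clock", server_time)
print(sydney.get("/clock"))          # evaluated on the Sydney host
sydney.move("/clock", cambridge)
print(cambridge.get("/clock"))       # same computation, now evaluated in Cambridge
```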
I had a few questions (some of which were prompted by discussions with Liming Zhu) but I didn’t get a chance to pin down Richard after the talk…
The first question is about CREST, but not BASE. We can observe that REST is “broken” on the web. For example, cookies aren’t part of (and in fact violate!) the REST principles, but they are nonetheless essential to the workings of the Web. That’s fine – pragmatics will almost always get in the way of a naive realisation of an abstract model. So my question is: how (if at all) does CREST need to be “broken” for it to be workable?
My second set of questions is about the BASE framework discussed in the paper. What limitations do the various architectural styles place on the scope of adaptation? How do you get assurance about invariant functionality? Why doesn’t BASE consider security? Dynamic adaptation is great, but not everything will be dynamically variable, and you probably want to know that some functionality won’t vary and won’t be subverted at runtime. How do the various architectural styles enable that?