CITCON 2008 (Europe) is now over, so it's time to give some feedback about the event. First, for those who are not familiar with the Open Space format, here is a short summary of how CITCON works.
CITCON was created by Jeffrey Fredrick and Paul Julius, who founded the OIF (Open Information Foundation), in 1996 I think.
Most participants arrive on Friday evening and are welcomed by the two organizers, who launch the conference. A microphone is then passed around to all participants (more or less 90 this year), who briefly introduce themselves and explain why they came to CITCON. It is always interesting to get a feel for the audience: the proportion of people coming to share experience versus those who want to learn, the Scrum or XP folks, the Java or .NET "adepts".
After that comes topic-proposal time. Jeffrey and Paul hand out post-its so that anyone can write down a subject they want to talk about. Those with a subject in mind then come to the center of the "arena" and briefly present their topic (maybe 40 were proposed this year).
That almost wraps up the introduction day. Everyone is then free to go and vote for the subjects they are interested in. The organizers group the topics, pick the subjects with the most votes, and assign each a time slot and a room. On Saturday, each session turns into a free discussion that anyone is free to join (or leave), with the person or people who proposed the subject acting as facilitators.
A real exchange of experience!
This year I went to CITCON with two colleagues: Norman Deschauwer and Thierry Thoua. Thierry should also soon post some notes about the sessions he attended. Follow his blog!
But let's get back to the sessions I attended.
- At 10:00, there was an interesting talk about the role of a functional tester. The people who had proposed the subject were not there, but Sai Patel took on the facilitator role and spoke about his experience. A bit fewer than 20 people were present, including Eric Jimmink, Freddy Mallet, Cirilo Wortel, Paul Julius, Norman Deschauwer and Thierry Thoua.
It was very interesting to compare his implementation of agile methodology with mine. Many questions and comparisons came up, as his project's size and evolution were very similar to mine, but with different strengths and weaknesses! He explained that his team grew (from 3 people to 10) and that over time the testers became specialized: he began with volunteer developers and now works with dedicated testers. His main problem? Working with six-week iterations where the testers can only intervene during the last two weeks. We came back to our idea of integrating the testers during the development phase (and also pairing functional analysts with developers), and Cirilo (who works as a functional tester) explained how he tests each task as soon as it is finished, simply by getting the sources locally. An interesting comparison with my project, as we are now thinking of introducing a kanban board to see more precisely where each task stands.
Paul Julius also spoke a bit about the best XP team he has worked with: a team where all six XP roles were clearly identified (customer, tester, manager, developer, tracker and coach) but where all the developers (understand here: collaborators or employees) rotated from one role to another.
Someone (sorry, I could not remember everyone's name...) also presented the finger chart, which shows the evolution / trend between what is analysed, developed, tested, ...
A one-sentence conclusion? The tester is a proxy for the client.
- At 11:15, we had a talk about the data used for development and testing. Among those present: Douglas Squirrel and Alexander Snaps. Three subjects were discussed:
- What is the source of data during the development phase (i.e., can we / should we use production data)?
- Should we create the database from scratch, or should we have change scripts to migrate from one version to the next?
- What about the DB scripts when branching?
The talk was quite interesting, even if some rather fixed positions were presented, such as the claim that we should rarely (if ever) need production data to test the application. Alexander explained that he keeps a table in his database storing the version number of the schema, along with the list of all changesets applied from one version to the next, so that the state of two production databases can easily be compared.
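Alexander's versioning idea can be sketched roughly as follows. This is only my interpretation, not his actual implementation: the `SchemaMigrator` and `Changeset` names are mine, and a real version would read the version number and changeset list from the database table he described rather than from memory.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Sketch (names are mine, not Alexander's): the database carries a schema
 * version number plus the ordered list of changesets already applied, so
 * two databases can be compared and migrated incrementally.
 */
public class SchemaMigrator {

    /** One change script, identified by the schema version it leads to. */
    public record Changeset(int targetVersion, String script) {}

    private final List<Changeset> allChangesets = new ArrayList<>();

    /** Register a changeset; in a real setup these live in the DB table. */
    public void register(int targetVersion, String script) {
        allChangesets.add(new Changeset(targetVersion, script));
    }

    /** Changesets to replay to bring a DB at currentVersion up to date. */
    public List<Changeset> pendingFor(int currentVersion) {
        List<Changeset> pending = new ArrayList<>();
        for (Changeset c : allChangesets) {
            if (c.targetVersion() > currentVersion) {
                pending.add(c);
            }
        }
        return pending;
    }
}
```

Comparing two production databases then boils down to comparing their stored version numbers and diffing the recorded changesets.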
We also spoke about Unitils, an open source tool that deals with testing in general and DB testing in particular.
- At 14:00, Freddy Mallet and I proposed a subject about code quality and the use of metrics. Among those present were Peter Camfield, Rene Medellin and John Van Krieken. Freddy wanted to briefly present one of his company's products, "Sonar", which focuses on code quality, while my goal was more of a brainstorming: a lot of metrics exist (for clients, developers, project managers, ...), but only some are interesting (from a developer's / architect's point of view) and can provide quick feedback about code quality.
Peter Camfield briefly presented the six indicators he uses on his project, among them test coverage and class coupling. We debated whether code coverage is a good indicator: if coverage is low, it is clearly a warning, but when it is high we cannot really conclude anything, as the tests may only pass through the code without making any assertions, for example. We also put coverage in relation with the cyclomatic complexity of a method to try to give it a further interpretation.
We also spoke about tools like FxCop and Gendarme (static code analysis: syntax and naming rules, ...), Simian (code duplication), NDepend and its CQL (query language), Crap4J (which Jeffrey presented last year), and another tool for mutation testing (I cannot remember the name. Was it Nester?)
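Crap4J makes the coverage/complexity combination concrete: a method's CRAP score grows with the square of its cyclomatic complexity, dampened by its test coverage. I quote the formula from memory here, so double-check it against the tool before relying on it:

```java
/**
 * CRAP score ("Change Risk Anti-Patterns") as computed by Crap4J
 * (formula quoted from memory; verify against the tool):
 * high complexity is only forgiven when coverage is high too.
 */
public class CrapScore {

    /**
     * @param complexity cyclomatic complexity of the method
     * @param coverage   test coverage of the method, between 0.0 and 1.0
     */
    public static double crap(int complexity, double coverage) {
        return Math.pow(complexity, 2) * Math.pow(1.0 - coverage, 3) + complexity;
    }
}
```

For example, `crap(5, 0.0)` gives 30.0 while `crap(5, 1.0)` gives 5.0: a complex but fully covered method ends up no worse than its raw complexity, which matches the discussion above that coverage alone means little without complexity as context.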
We also spoke about other metrics like burndown charts, some time ratios (time spent fixing defects versus time spent on new features, for instance), the number of commits per developer, the number of build failures, the build duration, and the number of "TODO" and "FIXME" tags (or any related tag found in the code).
We finally spoke about systems like "TeamCity" or "gated" check-ins that defer the actual commit until the build passes.
- At 15:15, I attended part of the talk by Cirilo Wortel and Jamie Dobson about functional testing.
They mainly spoke about FIT (Framework for Integrated Test) and FitNesse, explaining how these tools can help with functional testing. There was also a long methodology debate about iterations in an agile project and whether or not scope changes should be allowed during an iteration.
We also discussed some testing tools: some work inside a browser, like Watir (with FireWatir and SafariWatir for Firefox and Safari), WatiN and Selenium; others work without any browser, like Celerity, HtmlUnit and JsUnit Server.
I also spoke a bit about a difficulty we encounter in .NET, where the framework "changes" the ids of the controls, and how we could, via a static ".js" file, work with the controls of the page while knowing only part of their name.
Finally, some photos of the event to share with you!
A conclusion?
What else can I say about the event? Like last year, we had very interesting talks there, and it is a highly motivating event.
So let's meet again next year (in Barcelona? Prague?) and in the meantime let's try to pass on some of this motivation and enthusiasm to our teams!
Thanks guys !