[maemo-developers] adaptation of Extras QA hurdles
From: Thomas Perl <th.perl at gmail.com>
Date: Thu Jan 27 10:24:55 EET 2011
- Previous message: adaptation of Extras QA hurdles
- Next message: adaptation of Extras QA hurdles
- Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]
Hi!

2011/1/27 Riku Voipio <riku.voipio at nokia.com>:
> On 01/25/2011 05:24 PM, ext Andrew Flegg wrote:
>> On Tue, Jan 25, 2011 at 15:03, Felipe Crochik <felipe at crochik.com> wrote:
>>> A new version of an existing application should have a lower barrier
>>> to promotion than a new application.
>>
>> It's obvious; but even a minor change can have an enormous impact.
>> However, it's unlikely that the package (once optified) will become
>> non-optified, and numerous other things (descriptions will only bitrot
>> when major new features are introduced; icons won't disappear; bug
>> trackers are fairly persistent, etc.).
>
> For new versions, the set of checks should still be shorter and fewer
> votes needed. Basically the only check to be done is "does the new
> version have any regressions compared to the version already in Extras".

So a rogue developer could upload a well-behaved tool the first time (when the more thorough check takes place and more votes are needed), and then later upload a release containing dangerous code that sets the device on fire, and that update would not need as many votes? It does not even have to be a rogue developer: just think of the Debian SSL vulnerability. I'd consider a normal update as important as a new release.

If we were doing some kind of fast-tracking for minor updates, we should at least show a debdiff of both source packages somewhere in the web UI, so that a fellow developer can check which source code changes are included and better understand them (it should be obvious from the source diff what the changes are, and whether they are minor and really just aim to fix a bug).
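For illustration, such a review page would render a unified source diff, like the one debdiff(1) from devscripts produces for two .dsc files. A minimal stand-in sketch using only Python's stdlib difflib, with made-up file contents (the function name, file name, and code in it are all hypothetical):

```python
import difflib

# Hypothetical sketch: render a unified diff between the old and new
# version of one source file, as a fast-track review page might show it.
# (In practice, debdiff from devscripts does this for whole .dsc packages.)

def source_diff(old_lines, new_lines, name):
    """Return a unified diff between two versions of one source file."""
    return "".join(difflib.unified_diff(
        old_lines, new_lines,
        fromfile="old/" + name, tofile="new/" + name))

# Made-up example: a small, obviously bug-fixing change.
old = ["def fetch(url):\n",
       "    return urlopen(url).read()\n"]
new = ["def fetch(url):\n",
       "    # fix: close the connection after reading\n",
       "    with urlopen(url) as f:\n",
       "        return f.read()\n"]

print(source_diff(old, new, "fetch.py"))
```

A short, readable diff like this supports fast-tracking; a large or hard-to-follow diff would be a signal that the update deserves the full review process.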
My suggestion would be to add a more "formal" definition of the testing criteria, plus some web UI for entering the test results (which are then saved), with easy-to-understand instructions that even a non-developer can follow. That way we hopefully gain some manpower for the testing team from non-developers who also want to contribute, just not via code.

Everything that can be checked reliably by scripts (e.g. optification) should be checked by scripts, and should not appear on the "what a tester has to look at" list.

HTH,
Thomas
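A scripted optification check could, for instance, scan a package's file listing and flag large files installed outside /opt. A hypothetical sketch, not the actual autobuilder check; the paths, sizes, and the 10 KiB threshold are all made up for illustration:

```python
# Hypothetical sketch of an automated optification check: flag files
# that would land on the small rootfs instead of /opt. The input is a
# made-up (path, size-in-bytes) listing, like one derived from the
# output of `dpkg-deb -c package.deb`.

SIZE_LIMIT = 10 * 1024  # assumption: anything bigger belongs under /opt

def optification_issues(files):
    """Return paths of large files not installed under /opt."""
    return [path for path, size in files
            if size > SIZE_LIMIT and not path.startswith("/opt/")]

files = [
    ("/usr/bin/myapp", 200),                      # small launcher: fine
    ("/usr/share/myapp/icons.db", 500 * 1024),    # large, not optified
    ("/opt/myapp/data.sqlite", 2 * 1024 * 1024),  # large, but under /opt
]
print(optification_issues(files))  # → ['/usr/share/myapp/icons.db']
```

Checks like this are deterministic, so running them in the autobuilder frees human testers to concentrate on things scripts cannot judge.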