[maemo-developers] QA process for "middleware" (libfoo, python-bar) packages: some ideas
From: Anderson Lizardo anderson.lizardo at openbossa.org
Date: Thu Dec 17 14:28:01 EET 2009
- Previous message: QA process for "middleware" (libfoo, python-bar) packages: some ideas
- Next message: QA process for "middleware" (libfoo, python-bar) packages: some ideas
- Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]
On Wed, Dec 16, 2009 at 4:39 PM, Tim Teulings <rael at edge.ping.de> wrote:
> Hello!
>
>> I believe we can improve on this area: to have safer and optimized
>
> Do we have a problem? Does testing applications instead of enablers make
> us let problems go into extras that do not pass the extras criteria?

I believe application testing will catch most problems before packages hit extras. But there are some kinds of problems (such as installation/removal problems) which I believe are not covered by the QA criteria, and which I think could benefit from automated tests.

>> Let me give some examples of such possible bugs:
>>
>> * A new version of libfoo is uploaded with unexpected ABI (Application
>> Binary Interface) changes. At the same time some application is
>> compiled against it and it works fine, so the package (together with
>> libfoo) is promoted to extras. OTOH, this new version of libfoo does not
>> play well with the existing packages in extras, which will break or
>> even fail to start.
>
> Yes, this might be a problem. Could we test this automatically by
> checking missing symbols of binaries against offered symbols of
> libraries (for hard linking errors)? Non-hard linking errors can only be
> detected by testing all depending applications?

While I have not seen a single mention of this problem so far (maybe because we do not upload updated library packages very often, AFAICT from my occasional monitoring of the extras-cauldron mailing list), I think these kinds of "cross-application" breaks are quite possible as long as there is no clear policy on how to handle enablers/middleware used by many applications. For the official Maemo libraries we trust the internal Nokia QA, but for community enablers (which include the whole PyMaemo stack), the QA is our own responsibility. Testing all depending applications, if done manually, might not scale in the long term, as the number of applications tends to grow.
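Tim's idea of checking the symbols binaries need against the symbols libraries offer can be automated with standard binutils tools. Below is a minimal sketch; the paths (old/libfoo.so, new/libfoo.so, ./app-a, ./app-b) are placeholders, not real extras packages, and in practice this would have to run against binaries from the target rootfs:

```shell
#!/bin/sh
# Illustration only: find applications that an updated libfoo could break
# with unresolved symbols ("hard linking errors"). All paths below are
# placeholders for the old/new library builds and depending applications.
set -e

# Dynamic symbols exported by the old and the new library build
nm -D --defined-only old/libfoo.so | awk '{print $3}' | sort -u > old.syms
nm -D --defined-only new/libfoo.so | awk '{print $3}' | sort -u > new.syms

# Symbols the old version exported but the new one dropped
dropped=$(comm -23 old.syms new.syms)
[ -n "$dropped" ] || { echo "no exported symbols were dropped"; exit 0; }

# Flag every depending application that still references a dropped symbol
for APP in ./app-a ./app-b; do
    nm -D --undefined-only "$APP" | awk '{print $NF}' | sort -u > app.needs
    broken=$(printf '%s\n' "$dropped" | comm -12 - app.needs)
    if [ -n "$broken" ]; then
        printf '%s may break, it still needs:\n%s\n' "$APP" "$broken"
    fi
done
```

Note that this only catches removed symbols; a symbol whose behavior changed (a "soft" ABI break) still requires running the depending applications, which is exactly the scaling problem described above.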
But I think it might be worth including in the QA criteria a requirement to review depending applications whenever reviewing an application would also trigger an enabler update.

> Yes. There were discussions in the past on how to handle, manage, and
> maintain libraries that have multiple dependent applications with
> different maintainers. I do not remember that a solution was found
> (besides "talk to each other", "file a bug").

As I said earlier, I think we need to come up with specific QA guidelines for common libraries/bindings, so that a library update due to application A does not break an application B that depends on that same library. For that we might start by creating a list of such libraries. "apt-cache" has the information needed to check dependencies shared by more than one application (e.g. "apt-cache rdepends" lists the packages depending on a given library).

>> * Require (or strongly recommend) *all* packages to have at least some
>> sort of unit/functional testing. In fact, most packages imported from
>> Debian have tests, but what usually happens is that they are disabled
>> so the package can build on Scratchbox.
>
> IMHO that does not solve the above problems, and such a strong
> requirement will possibly keep a number of libraries out of the
> repository (including mine). Possibly even ones that are part of the
> platform? In fact, to solve the above problems this would mean that I do
> not have to test my application, but I must test in my application
> whether all functions I call from foreign code are available and do what
> I expect them to do. Of course, if I wrote tests for my library they
> would always pass and could still break applications at any time. If I
> drop functions I will drop the test, too. If I change the meaning of a
> function I will adapt the test, too. Same goes for applications. You
> want to test interactions between applications and libraries, so you
> must have test cases for this interaction. And while I appreciate
> automatic test suites, I and most other small projects cannot manage
> this because of lack of resources.
> I likely find 90% of my bugs using application functionality tests much
> faster (doing server development in my job, things are different...).

Unit testing is one approach (of many possible). Maybe tests can be made optional (and an application might gain "bonus points" on QA if it has good test coverage), but I think there should at least be some infrastructure that can run any available automated tests on the application and collect the results, so the developer does not have to remember to run them before each upload. This would not "block" the upload, but the upload of a source package could trigger an automatic test run.

>> * Have some way to run these tests automatically from the package
>> sources when uploading packages to the autobuilder.
>> * Exercise installation of packages (maybe using piuparts? See
>> http://packages.debian.org/sid/piuparts), if possible on a real
>> device.
>
> I think the maemo version of lintian does/will do such stuff, but not by
> installing, rather by checking the package for known failures. A working
> installation is not good enough. You would need to start the
> application, but how do you check that it works? We should solve easy
> problems first, and extending such a mechanism possibly fixes/finds more
> problems faster?

Note that I was thinking more about bugs in the installation/removal/upgrade stage itself, not in the application functionality. Installation/removal/upgrade bugs are better detected by actually installing packages on the target rootfs, because that involves running maintainer scripts, which might contain bugs. One such case where QA failed was the "rootsh" package (I copied Faheem, who maintains the package). I found that the rootsh version in the "extras" repository could not be removed using the Application Manager. When I tried on the command line, I noticed there was a syntax error in the postrm script (a missing "then"), preventing the removal of the package.
I just noticed it looks like https://bugs.maemo.org/show_bug.cgi?id=6014; I will comment more there.

>> * Test disk-full conditions, and random errors during package
>> installation of packages from Extras.
>
> Disk full on installation is a problem of the packaging mechanism and
> normally not a problem of the package (if it does not run space-using
> scripts of its own during installation). For checking disk-full
> conditions in the application you must install it, run it, and trigger
> its writing functionality. Doing this automatically is somewhere between
> difficult and impossible.

Well, I truly think "packaging problems" are very important, because they might render the device unusable or badly broken, requiring a reflash. In most cases they are caused by bugs in the postrm/prerm/preinst/postinst maintainer scripts. A disk-full condition just triggers these bugs more easily, so if we could at least simulate this condition during package installation, we might detect potential bugs in the installation process.

> I would suggest that testers collect recurring testing failures they
> feel could be found automatically, and contact the build masters in such
> cases (by filing a bug/enhancement request) - if they are not doing this
> anyway already.

I believe opening bugs with automation requests (if possible even with the automation script itself) is a nice idea.

Regards,
--
Anderson Lizardo
OpenBossa Labs - INdT
Manaus - Brazil