[maemo-developers] Testing marathon & Q&A Feedback

From: Quim Gil quim.gil at nokia.com
Date: Mon Nov 2 12:08:14 EET 2009

ext Ville Reijonen wrote:
> Is the approval/karma process going to be actually a popularity contest? 
> Popular titles get votes fast, niche software will not. Unless there 
> are regular testing marathons, I think this will be an issue.

I believe this problem will be solved as soon as we have the first few
thousand real users.

Very popular apps will get a critical mass of testers ready to process
new releases.

Very specialized apps... too. You only need a dozen committed users.

Then it might be that fresh new apps take a while to get enough testers,
and the first jump to Extras might be difficult. Having a brigade of
mercenary testers might help, sure.

But if after some time an app still hasn't attracted enough interest to
get a dozen regular testers... well, it might be that such a project
has other, deeper problems due to lack of user interest.


>> We need more people doing this, so the effort could be split (e.g. 
>> tester group A checks one app while tester group B checks another).
> 
> One cannot test everything on a virtual machine. For good testing, one 
> should have a device with a default setup so the effects can be observed 
> - i.e. energy usage, system configuration changes, compatibility, etc. 
> Good testing takes quite a lot of time.

But the time should be better split. The basic feedback I got from busy
Nokia employees with devices, professional experience and willingness to
help is that the thumbs up/down should be applied to each QA criterion.

If you are asking me to give a thumbs up / down on 10 criteria, the
realistic options are:

- I won't rate.
- I will rate ignoring some QA criteria.
- I will rate after investing a lot of time and skills.

If instead the rating were done per QA criterion, it would be easier to
see which parts are missing testing. Someone might find it interesting
to test optification because it's simple for them, and this will save
the work for someone else more interested in crashes, legal aspects,
usability, etc.

I believe this idea needs further thought. It might be a big part of
the solution.


> Additionally, I would think that one does not want to just put any 
> packages for testing on personal device with personal data. You might 
> accept the risk for software you like (and trust for some reason), but 
> not for all random packages -> popularity contest. As a comparison, 
> nothing should ever be tested on a server in production.

The N900 is not a server. There is no lack of people out there ready to
install testing versions of Debian, Ubuntu, Windows, Firefox and so on
for their primary use. For instance, that has been my case for many
years, and for something like my laptop, which is 100 times more
critical than my mobile device.


> Maybe this manna/karma thing has been thought out, but somehow I feel 
> that the research on similar systems was not done before rolling it 
> out. There seem to be too many holes, and I only thought about it for a 
> while. Every serious Linux distribution has some kind of QA system. 
> Most software makers have QA systems. Do not reinvent the wheel.

URLs proving your point, please?

Not every Linux distribution has the consumer focus of Maemo 5 / N900,
and definitely not every Linux distribution needs to handle the specific
problems of mobile devices. Besides, not every Linux distribution
actually pays much attention to the application packages being pushed by
its contributors, and it takes only 10 random installs from your distro
to find out.

So no, there are not that many references for community QA models. I
believe the Extras QA process is highly innovative, and it has a chance
of serving a useful purpose for the rest of the free software community.

-- 
Quim Gil
open source advocate
Maemo Devices @ Nokia