[maemo-developers] video processing on N810 using gstreamer

From: Bruno botte.pub at gmail.com
Date: Sat Aug 16 15:09:02 EEST 2008
Hello everybody, and I'm pleased to join this mailing list.

I'm currently trying to develop a video processing application for the
Nokia N810 (face detection & expression recognition).

I already have it working on a PC with a webcam (probably not yet
optimised enough for the Nokia, but that's the next step!).
I set up the maemo environment (using Diablo) and finally got
example_camera.c from maemo_example working, so I used its structure
for my application. But I'm not sure yet how the whole pipeline works,
and I haven't managed to get any results so far. :(

Here is my pipeline:

static gboolean initialize_pipeline(AppData *appdata,
        int *argc, char ***argv)
{
    GstElement *pipeline, *camera_src, *screen_sink;
    GstElement *screen_queue;
    GstElement *csp_filter, *tee;
    GstCaps *caps;
    GstBus *bus;


    /* Initialize Gstreamer */
    gst_init(argc, argv);

    /* Create pipeline and attach a callback to its
     * message bus */
    pipeline = gst_pipeline_new("test-camera");

    bus = gst_pipeline_get_bus(GST_PIPELINE(pipeline));
    gst_bus_add_watch(bus, (GstBusFunc)bus_callback, appdata);
    gst_object_unref(GST_OBJECT(bus));

    /* Save pipeline to the AppData structure */
    appdata->pipeline = pipeline;

    /* Create elements */
    /* Camera video stream comes from a Video4Linux driver */
    camera_src = gst_element_factory_make(VIDEO_SRC, "camera_src");
    /* The colorspace filter is needed to make sure that the sinks
     * understand the stream coming from the camera */
    csp_filter = gst_element_factory_make("ffmpegcolorspace", "csp_filter");
    /* Tee that copies the stream to multiple outputs */
    tee = gst_element_factory_make("tee", "tee");
    /* Queue creates new thread for the stream */
    screen_queue = gst_element_factory_make("queue", "screen_queue");
    /* Sink that shows the image on screen. Xephyr doesn't support XVideo
     * extension, so it needs to use ximagesink, but the device uses
     * xvimagesink */
    screen_sink = gst_element_factory_make(VIDEO_SINK, "screen_sink");


    /* Check that elements are correctly initialized */
    /* Check that elements are correctly initialized (tee included,
     * since it is used below) */
    if(!(pipeline && camera_src && screen_sink && csp_filter &&
            tee && screen_queue))
    {
        g_critical("Couldn't create pipeline elements");
        return FALSE;
    }

    /* Add elements to the pipeline. This has to be done prior to
     * linking them */
    gst_bin_add_many(GST_BIN(pipeline), camera_src, csp_filter,
            tee, screen_queue, screen_sink, NULL);

    /* Specify what kind of video is wanted from the camera */
    caps = gst_caps_new_simple("video/x-raw-rgb",
            "width", G_TYPE_INT, 640,
            "height", G_TYPE_INT, 480,
            "framerate", GST_TYPE_FRACTION, 25, 1,
            NULL);


    /* Link the camera source and colorspace filter using capabilities
     * specified */
    if(!gst_element_link_filtered(camera_src, csp_filter, caps))
    {
        gst_caps_unref(caps);
        return FALSE;
    }
    gst_caps_unref(caps);

    /* Connect Colorspace Filter -> Tee -> Screen Queue -> Screen Sink.
     * This finalizes the initialization of the screen part of the
     * pipeline */
    if(!gst_element_link_many(csp_filter, tee, screen_queue, screen_sink,
            NULL))
    {
        return FALSE;
    }

    /* gdkpixbuf requires 8 bits per sample, which is 24 bits per
     * pixel. (Note: in this snippet these caps are created but never
     * applied to any link; the sketch just below shows where I think
     * they were meant to go.) */
    caps = gst_caps_new_simple("video/x-raw-rgb",
            "width", G_TYPE_INT, 640,
            "height", G_TYPE_INT, 480,
            "bpp", G_TYPE_INT, 24,
            "depth", G_TYPE_INT, 24,
            NULL);
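
    /* [Sketch, not in my original code, untested: in example_camera.c I
     * believe these caps fed a second tee branch ending in a fakesink
     * with signal handoffs, roughly along these lines (the names below
     * are hypothetical):
     *
     *   GstElement *buffer_queue, *image_sink;
     *   buffer_queue = gst_element_factory_make("queue", "buffer_queue");
     *   image_sink = gst_element_factory_make("fakesink", "image_sink");
     *   g_object_set(G_OBJECT(image_sink), "signal-handoffs", TRUE, NULL);
     *   gst_bin_add_many(GST_BIN(pipeline), buffer_queue, image_sink, NULL);
     *   if(!gst_element_link(tee, buffer_queue)) return FALSE;
     *   if(!gst_element_link_filtered(buffer_queue, image_sink, caps))
     *       return FALSE;
     *   gst_caps_unref(caps);
     *   g_signal_connect(G_OBJECT(image_sink), "handoff",
     *           G_CALLBACK(on_handoff), appdata);
     *
     * where on_handoff would receive each GstBuffer for processing.] */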



// PROCESSING PART //

    int x, y, expression;
    double t;

    // facedetected holds the face found by the Viola-Jones detector,
    // at its original size
    IplImage *facedetected = NULL;
    // faceresized holds the detected face scaled to 108x147
    IplImage *faceresized = cvCreateImage(cvSize(108, 147), IPL_DEPTH_8U, 1);
    // faceresized2 holds faceresized with a 2-pixel black border around it
    IplImage *faceresized2 = cvCreateImage(cvSize(112, 151), IPL_DEPTH_8U, 1);




    // Plane that will hold the current frame data
    FLY_U8PlaneType *pcurrYPlane;
    pcurrYPlane = (FLY_U8PlaneType *)malloc(sizeof(FLY_U8PlaneType));

    // Allocate space for the image (assuming IMAGE_WIDTH == 640 and
    // IMAGE_HEIGHT == 480, to match the caps above); one byte per
    // pixel, so sizeof(unsigned char), not sizeof(unsigned char *)
    pcurrYPlane->Width  = 640;
    pcurrYPlane->Height = 480;
    pcurrYPlane->Stride = 640;
    pcurrYPlane->Buffer = (unsigned char *)calloc(
            IMAGE_WIDTH * IMAGE_HEIGHT, sizeof(unsigned char));
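
    /* [Sketch, my addition, untested: one way to fill the Y plane from
     * a 24 bpp RGB frame, assuming packed rows and that 'rgb' points at
     * the current frame's GST_BUFFER_DATA (integer BT.601 luma
     * approximation, 77 + 150 + 29 = 256):
     *
     *   int xx, yy;
     *   for (yy = 0; yy < IMAGE_HEIGHT; yy++)
     *       for (xx = 0; xx < IMAGE_WIDTH; xx++) {
     *           const guint8 *p = rgb + (yy * IMAGE_WIDTH + xx) * 3;
     *           pcurrYPlane->Buffer[yy * pcurrYPlane->Stride + xx] =
     *               (guint8)((77 * p[0] + 150 * p[1] + 29 * p[2]) >> 8);
     *       }
     * ] */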






// Here is the image processing part





    /* As soon as the screen is exposed, the window ID will be advised
     * to the sink */
    g_signal_connect(appdata->screen, "expose-event", G_CALLBACK(expose_cb),
             screen_sink);




    gst_element_set_state(pipeline, GST_STATE_PAUSED);

    return TRUE;
}
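
For reference, if I read my own code right, the display branch should
correspond to roughly this gst-launch line (taking v4l2src and
xvimagesink for VIDEO_SRC and VIDEO_SINK on the device):

    gst-launch-0.10 v4l2src ! video/x-raw-rgb,width=640,height=480,framerate=25/1 ! ffmpegcolorspace ! tee ! queue ! xvimagesink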




First, I'd like to know whether I'm going about this the right way.
Should the processing part live in the pipeline initialisation
function, or where should I put it? I need to process as many frames
from the camera as the power of the ARM processor permits.
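
Here is a rough, untested sketch of what I imagine, using a GStreamer
0.10 buffer probe on the colorspace filter's src pad (the names
process_cb and attach_probe are mine, not from the example):

#include <gst/gst.h>

/* Called for every buffer leaving csp_filter; returning TRUE lets the
 * (possibly modified) buffer continue downstream to the screen. */
static gboolean process_cb(GstPad *pad, GstBuffer *buffer, gpointer data)
{
    guint8 *pixels = GST_BUFFER_DATA(buffer);  /* raw RGB frame data */
    guint size = GST_BUFFER_SIZE(buffer);      /* 640*480*3 with my caps */

    /* face detection / expression recognition would run on 'pixels' */
    (void)pad; (void)data; (void)size;
    return TRUE;
}

/* Called from initialize_pipeline() once the elements are linked */
static void attach_probe(GstElement *csp_filter, gpointer appdata)
{
    GstPad *pad = gst_element_get_static_pad(csp_filter, "src");
    gst_pad_add_buffer_probe(pad, G_CALLBACK(process_cb), appdata);
    gst_object_unref(pad);
}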

My other problem is that I need to modify the buffer that will be
displayed, for instance to draw rectangles over the detected faces.
So I'd like to know how to access the buffer from the video_sink
element, how the pixels are laid out, and how to modify their values.
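
Assuming 24-bit packed RGB with stride = width*3 (I don't know whether
the sink pads its rows, and the channel order depends on the caps'
red/green/blue masks), I imagine drawing a rectangle outline inside the
probe could look like this, again untested:

#include <glib.h>

/* Draws a one-pixel rectangle outline; the caller must keep the
 * coordinates inside the frame. Writes 255,0,0 into each pixel's
 * three bytes, i.e. red if the buffer really is in RGB order. */
static void draw_rect(guint8 *pixels, int width,
                      int x0, int y0, int x1, int y1)
{
    int stride = width * 3;  /* assumes rows are packed, no padding */
    int x, y;

    for (x = x0; x <= x1; x++) {  /* top and bottom edges */
        guint8 *t = pixels + y0 * stride + x * 3;
        guint8 *b = pixels + y1 * stride + x * 3;
        t[0] = 255; t[1] = 0; t[2] = 0;
        b[0] = 255; b[1] = 0; b[2] = 0;
    }
    for (y = y0; y <= y1; y++) {  /* left and right edges */
        guint8 *l = pixels + y * stride + x0 * 3;
        guint8 *r = pixels + y * stride + x1 * 3;
        l[0] = 255; l[1] = 0; l[2] = 0;
        r[0] = 255; r[1] = 0; r[2] = 0;
    }
}

Calling something like draw_rect(pixels, 640, x, y, x + w, y + h) from
process_cb before returning should then show up on screen, if I
understand the dataflow right. Is that the correct approach, or should
I be touching the sink's buffer some other way?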


I hope my questions are understandable; I'm not really used to
object-oriented frameworks and don't yet grasp every aspect of
GStreamer. Thanks a lot for your attention, and have a nice weekend!

Bruno