<div dir="ltr"><div class="gmail_quote">
<div dir="ltr">
<div class="gmail_quote">
<div dir="ltr">Hello everybody, and pleased to join this mailing list.<br><br>I'm currently trying to develop a video-processing application for the Nokia (face detection &amp; expression recognition).<br><br>I already have it working on a PC with a webcam (probably not yet optimised enough for the Nokia, but that's the next step!).<br>
I configured the maemo environment (using Diablo) and finally got the example_camera.c from maemo_example working.<br>So I used its structure as the basis for my application, but I'm not sure yet how the pipeline works, and I haven't been able to get any result :(<br>
<br>Here is my pipeline:<br><br>static gboolean initialize_pipeline(AppData *appdata,<br> int *argc, char ***argv)<br>{<br> GstElement *pipeline, *camera_src, *screen_sink;<br> GstElement *screen_queue;<br> GstElement *csp_filter, *tee;<br> GstCaps *caps;<br> GstBus *bus;<br><br> /* Initialize GStreamer */<br> gst_init(argc, argv);<br><br> /* Create the pipeline and attach a callback to its<br> * message bus */<br>
pipeline = gst_pipeline_new("test-camera");<br><br> bus = gst_pipeline_get_bus(GST_PIPELINE(pipeline));<br> gst_bus_add_watch(bus, (GstBusFunc)bus_callback, appdata);<br> gst_object_unref(GST_OBJECT(bus));<br>
<br> /* Save pipeline to the AppData structure */<br> appdata->pipeline = pipeline;<br> <br> /* Create elements */<br> /* Camera video stream comes from a Video4Linux driver */<br> camera_src = gst_element_factory_make(VIDEO_SRC, "camera_src");<br>
/* Colorspace filter is needed to make sure that sinks understands<br> * the stream coming from the camera */<br> csp_filter = gst_element_factory_make("ffmpegcolorspace", "csp_filter");<br>
/* Tee that copies the stream to multiple outputs */<br> tee = gst_element_factory_make("tee", "tee");<br> /* Queue creates new thread for the stream */<br> screen_queue = gst_element_factory_make("queue", "screen_queue");<br>
/* Sink that shows the image on screen. Xephyr doesn't support XVideo<br> * extension, so it needs to use ximagesink, but the device uses<br> * xvimagesink */<br> screen_sink = gst_element_factory_make(VIDEO_SINK, "screen_sink");<br>
<br><br> /* Check that all elements were created (including the tee) */<br> if(!(pipeline && camera_src && csp_filter && tee && screen_queue && screen_sink))<br> {<br> g_critical("Couldn't create pipeline elements");<br>
return FALSE;<br> }<br> <br> /* Add elements to the pipeline. This has to be done prior to<br> * linking them */<br> gst_bin_add_many(GST_BIN(pipeline), camera_src, csp_filter,<br> tee, screen_queue, screen_sink, NULL);<br>
<br> /* Specify what kind of video is wanted from the camera */<br> caps = gst_caps_new_simple("video/x-raw-rgb",<br> "width", G_TYPE_INT, 640,<br> "height", G_TYPE_INT, 480,<br>
"framerate", GST_TYPE_FRACTION, 25, 1,<br> NULL);<br><br> /* Link the camera source and colorspace filter using the capabilities<br> * specified above; the caps must be unreffed on both paths */<br> if(!gst_element_link_filtered(camera_src, csp_filter, caps))<br> {<br> gst_caps_unref(caps);<br> return FALSE;<br> }<br> gst_caps_unref(caps);<br><br> /* Connect Colorspace Filter -> Tee -> Screen Queue -> Screen Sink<br> * This finalizes the initialization of the screen-part of the pipeline */<br>
if(!gst_element_link_many(csp_filter, tee, screen_queue, screen_sink, NULL))<br> {<br> return FALSE;<br> }<br><br> /* gdkpixbuf requires 8 bits per sample, i.e. 24 bits per pixel;<br> * these caps are intended for the processing branch and are not<br> * yet used (or unreffed) anywhere below */<br> caps = gst_caps_new_simple("video/x-raw-rgb",<br> "width", G_TYPE_INT, 640,<br> "height", G_TYPE_INT, 480,<br> "bpp", G_TYPE_INT, 24,<br> "depth", G_TYPE_INT, 24,<br> NULL);<br>
<br><br>// PROCESSING PART //<br><br> int x, y, expression;<br> double t;<br><br> // facedetected holds the face found by the Viola-Jones detector, at its original size<br> IplImage *facedetected = NULL;<br> // faceresized holds the detected face scaled to 108x147<br> IplImage *faceresized = cvCreateImage(cvSize(108,147), IPL_DEPTH_8U, 1);<br> // faceresized2 holds the face from faceresized with a 2-pixel black border around it<br> IplImage *faceresized2 = cvCreateImage(cvSize(112,151), IPL_DEPTH_8U, 1);<br><br> // Plane that will hold the current frame data<br> FLY_U8PlaneType *pcurrYPlane;<br> pcurrYPlane = (FLY_U8PlaneType *) malloc(sizeof(FLY_U8PlaneType));<br><br> // allocate space for the image: one byte per pixel,<br> // so sizeof(unsigned char), not sizeof(unsigned char *)<br> pcurrYPlane->Width = 640;<br> pcurrYPlane->Height = 480;<br> pcurrYPlane->Stride = 640;<br> pcurrYPlane->Buffer = (unsigned char *) calloc(IMAGE_WIDTH * IMAGE_HEIGHT, sizeof(unsigned char));<br>
<br><br>// Here is the image processing part<br><br> /* As soon as the screen is exposed, the window ID will be advised to the sink */<br> g_signal_connect(appdata->screen, "expose-event", G_CALLBACK(expose_cb),<br> screen_sink);<br><br> gst_element_set_state(pipeline, GST_STATE_PAUSED);<br><br> return TRUE;<br>}<br><br>First, I'd like to know whether the way I'm doing this is right. Should the processing part be in the pipeline initialisation function, and if not, where should I put it? I need to process as many frames from the camera as the power of the ARM processor permits.<br>
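To make the question concrete, here is my current guess at what a processing branch would look like, following the fakesink/"handoff" pattern I saw in example_camera.c (GStreamer 0.10, as on Diablo). The helper function, element names and callback are mine and untested; `caps` would be the 24 bpp caps created above:

```c
#include <gst/gst.h>

/* Called once per frame by fakesink when signal-handoffs is enabled */
static void buffer_probe_cb(GstElement *fakesink, GstBuffer *buffer,
                            GstPad *pad, gpointer user_data)
{
    /* With 24 bpp RGB caps, GST_BUFFER_DATA() points at
     * width * height * 3 bytes of packed pixel data. Heavy work here
     * blocks this branch, so a slow detector should hand the frame
     * off to a worker thread instead. */
    unsigned char *data = GST_BUFFER_DATA(buffer);
    (void)data; /* run face detection / expression recognition here */
}

/* Meant to be called from initialize_pipeline() after the screen
 * branch is linked; tee and caps are the ones created there */
static gboolean add_processing_branch(GstElement *pipeline, GstElement *tee,
                                      GstCaps *caps, gpointer appdata)
{
    GstElement *image_queue = gst_element_factory_make("queue", "image_queue");
    GstElement *image_sink  = gst_element_factory_make("fakesink", "image_sink");
    if(!(image_queue && image_sink))
        return FALSE;

    /* Ask fakesink to emit "handoff" for every buffer it receives */
    g_object_set(G_OBJECT(image_sink), "signal-handoffs", TRUE, NULL);

    gst_bin_add_many(GST_BIN(pipeline), image_queue, image_sink, NULL);
    if(!gst_element_link(tee, image_queue))
        return FALSE;
    if(!gst_element_link_filtered(image_queue, image_sink, caps))
        return FALSE;

    g_signal_connect(image_sink, "handoff",
                     G_CALLBACK(buffer_probe_cb), appdata);
    return TRUE;
}
```

Is this roughly the right shape, or is there a better place to hook in per-frame processing?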
<br>My other problem is that I need to modify the buffer that will be displayed, for instance to draw rectangles over the detected faces.<br>So I'd like to know how to access the buffer from the video_sink element, how its pixels are laid out, and how to modify their values.<br>
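To make this question concrete too: assuming the buffer really is packed 24 bpp RGB as the caps above request (3 bytes per pixel, rows stored one after another), the kind of modification I have in mind would look like the sketch below. `draw_rect_rgb24` is just my own illustration, not anything from GStreamer:

```c
/* Draw a 1-pixel red rectangle outline into a packed 24 bpp RGB
 * buffer: row-major, 3 bytes (R, G, B) per pixel, width*3 bytes
 * per row. Coordinates are inclusive and assumed in range. */
static void draw_rect_rgb24(unsigned char *buf, int width, int height,
                            int x0, int y0, int x1, int y1)
{
    int stride = width * 3;
    for (int x = x0; x <= x1; x++) {          /* top and bottom edges */
        unsigned char *top = buf + y0 * stride + x * 3;
        unsigned char *bot = buf + y1 * stride + x * 3;
        top[0] = 255; top[1] = 0; top[2] = 0;
        bot[0] = 255; bot[1] = 0; bot[2] = 0;
    }
    for (int y = y0; y <= y1; y++) {          /* left and right edges */
        unsigned char *lft = buf + y * stride + x0 * 3;
        unsigned char *rgt = buf + y * stride + x1 * 3;
        lft[0] = 255; lft[1] = 0; lft[2] = 0;
        rgt[0] = 255; rgt[1] = 0; rgt[2] = 0;
    }
}
```

Is this the right mental model for the sink's buffer, and where would I get hold of that buffer to apply such a change before display?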
<br><br>I hope my questions are understandable; I'm not really used to object-oriented frameworks and don't yet grasp every aspect of GStreamer.<br>Thanks a lot for your attention, and have a nice weekend!<br><br>Bruno<br>
<br></div><br></div><br></div></div><br></div>