Hello,<br><div><span class="e" id="q_11bd5e18ac8e7c9a_0"><br>I'm sorry, I meant to send the previous message to the list, but I only sent it to Stefan.<br>
<br>So my VIDEO_ defines are:<br><br>#define VIDEO_SRC "v4l2src"<br>
#define VIDEO_SINK "xvimagesink"<br>
<br>I removed the second caps part. I had pasted it from the original code I was using as an example, but it has no place in my pipeline ;)<br><br><br>But the problem doesn't come from that; my pipeline actually works: I have a start/stop button, and when I press it, the video from the camera appears on the screen.<br>
<br>Now I'd like to run some code continuously: compute a result from the current image buffer, then repeat with the next frame once the computation is done.<br><br>Bruno<br><br><br><div><span class="gmail_quote">2008/8/18, Stefan Kost <<a href="mailto:ensonic@hora-obscura.de" target="_blank">ensonic@hora-obscura.de</a>>:</span><div>
<span><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
Bruno wrote:<br> <br>> Hi Steph,<br> ><br> > - By video_sink I meant screen_sink, which contains the current frame<br> > buffer from the camera.<br> > - The video/x-raw-rgb seems to configure the input of the sensor, I<br>
> just took it like this in the maemo camera example.<br> <br> <br>Please reply to the list. I am not mr. gstreamer support, right.<br> <br> Then please read my comments again. There is no element called<br> "screen_sink", so please also show that #define VIDEO_SINK ....<br>
Second, the first video/x-raw-rgb caps is for the camera format, but you define<br> another caps further below without using it in the code snippet.<br> <br> Stefan<br> <br>><br> ><br> > Actually, I'm just trying to find a place where I can put my image<br>
> processing code so it can process the frames from the camera.<br> > I'd like the program to process one frame from the camera, ignore<br> > the others until it finishes image processing on it, and then process<br>
> the next one:<br> ><br> > * |Camera|    |CSP   |    |Screen|    |Screen|    |Calculate |<br> > * |src   | -> |Filter| -> |queue | -> | sink | -> |expression| -> Display<br>
><br> ><br> > Here is my pipeline with some useless stuff removed :<br> ><br> > static gboolean initialize_pipeline(AppData *appdata,<br> > int *argc, char ***argv)<br> > {<br> > GstElement *pipeline, *camera_src, *screen_sink;<br>
> GstElement *screen_queue;<br> > GstElement *csp_filter;<br> > GstCaps *caps;<br> > GstBus *bus;<br> ><br> ><br> > /* Initialize Gstreamer */<br> > gst_init(argc, argv);<br>
><br> > /* Create pipeline and attach a callback to its<br> > * message bus */<br> > pipeline = gst_pipeline_new("test-camera");<br> ><br> > bus = gst_pipeline_get_bus(GST_PIPELINE(pipeline));<br>
> gst_bus_add_watch(bus, (GstBusFunc)bus_callback, appdata);<br> > gst_object_unref(GST_OBJECT(bus));<br> ><br> > /* Save pipeline to the AppData structure */<br> > appdata->pipeline = pipeline;<br>
><br> > /* Create elements */<br> > /* Camera video stream comes from a Video4Linux driver */<br> > camera_src = gst_element_factory_make(VIDEO_SRC, "camera_src");<br> > /* Colorspace filter is needed to make sure that sinks understands<br>
> * the stream coming from the camera */<br> > csp_filter = gst_element_factory_make("ffmpegcolorspace",<br> > "csp_filter");<br> > /* Queue creates new thread for the stream */<br>
> screen_queue = gst_element_factory_make("queue", "screen_queue");<br> > /* Sink that shows the image on screen. Xephyr doesn't support XVideo<br> > * extension, so it needs to use ximagesink, but the device uses<br>
> * xvimagesink */<br> > screen_sink = gst_element_factory_make(VIDEO_SINK, "screen_sink");<br> ><br> ><br> > /* Check that elements are correctly initialized */<br> > if(!(pipeline && camera_src && screen_sink && csp_filter &&<br>
> screen_queue))<br> > {<br> > g_critical("Couldn't create pipeline elements");<br> > return FALSE;<br> > }<br> ><br> ><br> > /* Add elements to the pipeline. This has to be done prior to<br>
> * linking them */<br> > gst_bin_add_many(GST_BIN(pipeline), camera_src, csp_filter,<br> > screen_queue, screen_sink, NULL);<br> ><br> > /* Specify what kind of video is wanted from the camera */<br>
> caps = gst_caps_new_simple("video/x-raw-rgb",<br> > "width", G_TYPE_INT, 640,<br> > "height", G_TYPE_INT, 480,<br> > "framerate", GST_TYPE_FRACTION, 25, 1,<br>
> NULL);<br> ><br> ><br> > /* Link the camera source and colorspace filter using capabilities<br> > * specified */<br> > if(!gst_element_link_filtered(camera_src, csp_filter, caps))<br>
> {<br> > return FALSE;<br> > }<br> > gst_caps_unref(caps);<br> ><br> > /* Connect Colorspace Filter -> Screen Queue -> Screen Sink.<br> > * This finalizes the initialization of the screen-part of the<br>
> pipeline */<br> > if(!gst_element_link_many(csp_filter, screen_queue, screen_sink,<br> > NULL))<br> > {<br> > return FALSE;<br> > }<br> ><br> > /* gdkpixbuf requires 8 bits per sample which is 24 bits per<br>
> * pixel */<br> > caps = gst_caps_new_simple("video/x-raw-rgb",<br> > "width", G_TYPE_INT, 640,<br> > "height", G_TYPE_INT, 480,<br> > "bpp", G_TYPE_INT, 24,<br>
> "depth", G_TYPE_INT, 24,<br> > NULL);<br> ><br> ><br> ><br> ><br> ><br> > /* As soon as screen is exposed, window ID will be advised to the<br> > sink */<br>
> g_signal_connect(appdata->screen, "expose-event",<br> > G_CALLBACK(expose_cb),<br> > screen_sink);<br> ><br> ><br> ><br> ><br> > gst_element_set_state(pipeline, GST_STATE_PAUSED);<br>
><br> > return TRUE;<br> > }<br> ><br> ><br> > I tried to have a printf("hello") on each video frame. I put the<br> > printf in the pipeline initialisation, in the main, and in the expose<br>
> cb function which is :<br> ><br> > /* Callback to be called when the screen-widget is exposed */<br> > static gboolean expose_cb(GtkWidget * widget, GdkEventExpose * event,<br> > gpointer data)<br> > {<br>
> /* Tell the xvimagesink/ximagesink the x-window-id of the screen<br> > * widget in which the video is shown. After this the video<br> > * is shown in the correct widget */<br> > gst_x_overlay_set_xwindow_id(GST_X_OVERLAY(data),<br>
> GDK_WINDOW_XWINDOW(widget->window));<br> > return FALSE;<br> > }<br> ><br> > And no success: the printf is shown only once, when I start the<br> > pipeline. (I have a button to start and stop it.)<br>
><br> > Maybe I can make a function with the processing part in it, and tell<br> > the pipeline to run it with a g_signal_connect call.<br> ><br> > Thanks in advance,<br> > Bruno<br> <br> </blockquote>
</span></div></div>
</span></div>