Valkka 1.6.1 (OpenSource Video Management)

FrameFifoContext Struct Reference

Describes the stack structure and fifo behaviour for a FrameFifo.
#include <framefifo.h>
Public Member Functions

  FrameFifoContext (int n_basic, int n_avpkt, int n_avframe, int n_yuvpbo, int n_setup, int n_signal, bool flush_when_full)
  FrameFifoContext (int n_signal)
Public Attributes

  int   n_basic           data at payload
  int   n_avpkt           data at ffmpeg avpkt
  int   n_avframe         data at ffmpeg av_frame and ffmpeg av_codec_context
  int   n_yuvpbo          data at yuvpbo struct
  int   n_setup           setup data
  int   n_signal          signals to AVThread or OpenGLThread
  int   n_marker          marks start/end of frame emission; defaults to n_signal
  bool  flush_when_full   flush when filled
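To illustrate the API above, here is a minimal sketch that constructs a FrameFifoContext with both constructors listed on this page. The concrete stack sizes are arbitrary example values, and the defaults applied by the single-argument constructor are an assumption of this sketch, not something documented here.

#include <framefifo.h>

int main() {
    // Full constructor: every stack size is given explicitly.
    // Argument order follows the signature listed above.
    FrameFifoContext ctx(
        50,    // n_basic   : data at payload
        50,    // n_avpkt   : data at ffmpeg avpkt
        50,    // n_avframe : data at ffmpeg av_frame / av_codec_context
        50,    // n_yuvpbo  : data at yuvpbo struct
        50,    // n_setup   : setup data
        20,    // n_signal  : signals to AVThread or OpenGLThread
        true   // flush_when_full
    );

    // n_marker is not a constructor argument; per the attribute list above
    // it defaults to n_signal and can be adjusted afterwards.
    ctx.n_marker = 20;

    // Signal-only constructor: only the number of signal frames is given;
    // the remaining stack sizes keep the library's defaults.
    FrameFifoContext signal_ctx(10);

    return 0;
}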
Detailed Description

Describes the stack structure and fifo behaviour for a FrameFifo.
libValkka pre-reserves memory for all incoming data, decoded frames, etc. Say we reserve 50 BasicFrame objects into a stack for the incoming H264 frames from the camera: once an H264 frame has been successfully decoded, the object is recycled back into the stack and used again. Depending on your number of cameras, you might need more. On the other hand, needing a lot of pre-reserved frames usually means that you are not decoding fast enough (and therefore not returning frames to the stack in time).
There is a stack of frames in LiveThread (which receives the raw H264), but also in each AVThread instance: LiveThread uses frames from its stack to store the incoming H264 data, and AVThread then uses frames from its own stack to make a copy of that H264, and so on.
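As a rough sketch of the sizing advice above, the stack sizes could be scaled with the camera count. The helper below is hypothetical, and the factor of 20 frames per camera is an assumption for illustration rather than a libValkka recommendation; it relies only on the constructor signature listed on this page.

#include <framefifo.h>

// Hypothetical helper: scale the pre-reserved stacks with the number of
// cameras. The per-camera factor is an assumption; tune it against your
// actual decoding throughput.
FrameFifoContext makeFifoContext(int n_cameras) {
    int per_type = 20 * n_cameras;
    return FrameFifoContext(
        per_type,   // n_basic   : incoming H264 payload frames
        per_type,   // n_avpkt   : ffmpeg avpkt frames
        per_type,   // n_avframe : ffmpeg av_frame / av_codec_context frames
        per_type,   // n_yuvpbo  : yuvpbo frames
        per_type,   // n_setup   : setup frames
        20,         // n_signal  : signal frames for AVThread / OpenGLThread
        true        // flush_when_full
    );
}

// Example: a four-camera system.
FrameFifoContext ctx = makeFifoContext(4);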