Top |
struct | GstVideoAlignment |
#define | GST_META_TAG_VIDEO_STR |
#define | GST_META_TAG_VIDEO_ORIENTATION_STR |
#define | GST_META_TAG_VIDEO_SIZE_STR |
#define | GST_META_TAG_VIDEO_COLORSPACE_STR |
enum | GstVideoFormat |
#define | GST_VIDEO_MAX_PLANES |
#define | GST_VIDEO_MAX_COMPONENTS |
struct | GstVideoFormatInfo |
enum | GstVideoFormatFlags |
enum | GstVideoPackFlags |
#define | GST_VIDEO_SIZE_RANGE |
#define | GST_VIDEO_FPS_RANGE |
#define | GST_VIDEO_FORMATS_ALL |
enum | GstVideoColorRange |
enum | GstVideoColorMatrix |
enum | GstVideoColorPrimaries |
enum | GstVideoTransferFunction |
GstVideoColorimetry | |
struct | GstVideoInfo |
enum | GstVideoInterlaceMode |
enum | GstVideoMultiviewMode |
enum | GstVideoMultiviewFramePacking |
enum | GstVideoMultiviewFlags |
enum | GstVideoFlags |
struct | GstVideoFrame |
enum | GstVideoFrameFlags |
enum | GstVideoBufferFlags |
enum | GstVideoTileType |
enum | GstVideoTileMode |
GstVideoConverter |
gboolean gst_video_calculate_display_ratio (guint *dar_n
,guint *dar_d
,guint video_width
,guint video_height
,guint video_par_n
,guint video_par_d
,guint display_par_n
,guint display_par_d
);
Given the pixel aspect ratio and size of an input video frame, and the pixel aspect ratio of the intended display device, calculates the actual display ratio the video will be rendered with.
dar_n |
Numerator of the calculated display_ratio |
|
dar_d |
Denominator of the calculated display_ratio |
|
video_width |
Width of the video frame in pixels |
|
video_height |
Height of the video frame in pixels |
|
video_par_n |
Numerator of the pixel aspect ratio of the input video. |
|
video_par_d |
Denominator of the pixel aspect ratio of the input video. |
|
display_par_n |
Numerator of the pixel aspect ratio of the display device |
|
display_par_d |
Denominator of the pixel aspect ratio of the display device |
gboolean gst_video_guess_framerate (GstClockTime duration
,gint *dest_n
,gint *dest_d
);
Given the nominal duration of one video frame, this function will check some standard framerates for a close match (within 0.1%) and return one if possible. It will calculate an arbitrary framerate if no close match was found, and return FALSE.
It returns FALSE if a duration of 0 is passed.
duration |
Nominal duration of one frame |
|
dest_n |
Numerator of the calculated framerate. |
[out][allow-none] |
dest_d |
Denominator of the calculated framerate. |
[out][allow-none] |
Since: 1.6
void (*GstVideoConvertSampleCallback) (GstSample *sample
,GError *error
,gpointer user_data
);
GstSample * gst_video_convert_sample (GstSample *sample
,const GstCaps *to_caps
,GstClockTime timeout
,GError **error
);
Converts a raw video buffer into the specified output caps.
The output caps can be any raw video formats or any image formats (jpeg, png, ...).
The width, height and pixel-aspect-ratio can also be specified in the output caps.
void gst_video_convert_sample_async (GstSample *sample
,const GstCaps *to_caps
,GstClockTime timeout
,GstVideoConvertSampleCallback callback
,gpointer user_data
,GDestroyNotify destroy_notify
);
Converts a raw video buffer into the specified output caps.
The output caps can be any raw video formats or any image formats (jpeg, png, ...).
The width, height and pixel-aspect-ratio can also be specified in the output caps.
callback
will be called after conversion, when an error occurred, or if conversion didn't
finish after timeout
. callback
will always be called from the thread default
GMainContext
, see g_main_context_get_thread_default()
. If GLib before 2.22 is used,
this will always be the global default main context.
destroy_notify
will be called after the callback was called and user_data
is not needed anymore.
sample |
||
to_caps |
the GstCaps to convert to |
|
timeout |
the maximum amount of time allowed for the processing. |
|
callback |
|
|
user_data |
extra data that will be passed to the callback |
|
destroy_notify |
|
void
gst_video_alignment_reset (GstVideoAlignment *align
);
Set align
to its default values with no padding and no alignment.
GstEvent *
gst_video_event_new_still_frame (gboolean in_still
);
Creates a new Still Frame event. If in_still
is TRUE
, then the event
represents the start of a still frame sequence. If it is FALSE
, then
the event ends a still frame sequence.
To parse an event created by gst_video_event_new_still_frame()
use
gst_video_event_parse_still_frame()
.
gboolean gst_video_event_parse_still_frame (GstEvent *event
,gboolean *in_still
);
Parse a GstEvent, identify if it is a Still Frame event, and return the still-frame state from the event if it is. If the event represents the start of a still frame, the in_still variable will be set to TRUE, otherwise FALSE. It is OK to pass NULL for the in_still variable in order to just check whether the event is a valid still-frame event.
Create a still frame event using gst_video_event_new_still_frame()
event |
A GstEvent to parse |
|
in_still |
A boolean to receive the still-frame status from the event, or NULL |
GstEvent * gst_video_event_new_downstream_force_key_unit (GstClockTime timestamp
,GstClockTime stream_time
,GstClockTime running_time
,gboolean all_headers
,guint count
);
Creates a new downstream force key unit event. A downstream force key unit event can be sent down the pipeline to request downstream elements to produce a key unit. A downstream force key unit event must also be sent when handling an upstream force key unit event to notify downstream that the latter has been handled.
To parse an event created by gst_video_event_new_downstream_force_key_unit()
use
gst_video_event_parse_downstream_force_key_unit()
.
timestamp |
the timestamp of the buffer that starts a new key unit |
|
stream_time |
the stream_time of the buffer that starts a new key unit |
|
running_time |
the running_time of the buffer that starts a new key unit |
|
all_headers |
|
|
count |
integer that can be used to number key units |
gboolean gst_video_event_parse_downstream_force_key_unit (GstEvent *event
,GstClockTime *timestamp
,GstClockTime *stream_time
,GstClockTime *running_time
,gboolean *all_headers
,guint *count
);
Get timestamp, stream-time, running-time, all-headers and count in the force
key unit event. See gst_video_event_new_downstream_force_key_unit()
for a
full description of the downstream force key unit event.
running_time
will be adjusted for any pad offsets of pads it was passing through.
event |
A GstEvent to parse |
|
timestamp |
A pointer to the timestamp in the event. |
[out] |
stream_time |
A pointer to the stream-time in the event. |
[out] |
running_time |
A pointer to the running-time in the event. |
[out] |
all_headers |
A pointer to the all_headers flag in the event. |
[out] |
count |
A pointer to the count field of the event. |
[out] |
GstEvent * gst_video_event_new_upstream_force_key_unit (GstClockTime running_time
,gboolean all_headers
,guint count
);
Creates a new upstream force key unit event. An upstream force key unit event can be sent to request upstream elements to produce a key unit.
running_time
can be set to request a new key unit at a specific
running_time. If set to GST_CLOCK_TIME_NONE, upstream elements will produce a
new key unit as soon as possible.
To parse an event created by gst_video_event_new_upstream_force_key_unit()
use
gst_video_event_parse_upstream_force_key_unit()
.
running_time |
the running_time at which a new key unit should be produced |
|
all_headers |
|
|
count |
integer that can be used to number key units |
gboolean gst_video_event_parse_upstream_force_key_unit (GstEvent *event
,GstClockTime *running_time
,gboolean *all_headers
,guint *count
);
Get running-time, all-headers and count in the force key unit event. See
gst_video_event_new_upstream_force_key_unit()
for a full description of the
upstream force key unit event.
Create an upstream force key unit event using gst_video_event_new_upstream_force_key_unit()
running_time
will be adjusted for any pad offsets of pads it was passing through.
event |
A GstEvent to parse |
|
running_time |
A pointer to the running_time in the event. |
[out] |
all_headers |
A pointer to the all_headers flag in the event. |
[out] |
count |
A pointer to the count field in the event. |
[out] |
gboolean
gst_video_event_is_force_key_unit (GstEvent *event
);
Checks if an event is a force key unit event. Returns TRUE for both upstream and downstream force key unit events.
void (*GstVideoFormatUnpack) (const GstVideoFormatInfo *info
,GstVideoPackFlags flags
,gpointer dest
,const gpointer data[GST_VIDEO_MAX_PLANES]
,const gint stride[GST_VIDEO_MAX_PLANES]
,gint x
,gint y
,gint width
);
Unpacks width
pixels from the given planes and strides containing data of
format info
. The pixels will be unpacked into dest
with each component
interleaved. dest
should be at least big enough to hold width
*
n_components * size(unpack_format) bytes.
For subsampled formats, the components will be duplicated in the destination array. Reconstruction of the missing components can be performed in a separate step after unpacking.
void (*GstVideoFormatPack) (const GstVideoFormatInfo *info
,GstVideoPackFlags flags
,const gpointer src
,gint sstride
,gpointer data[GST_VIDEO_MAX_PLANES]
,const gint stride[GST_VIDEO_MAX_PLANES]
,GstVideoChromaSite chroma_site
,gint y
,gint width
);
Packs width
pixels from src
to the given planes and strides in the
format info
. The pixels from source have each component interleaved
and will be packed into the planes in data
.
This function operates on pack_lines lines, meaning that src
should
contain at least pack_lines lines with a stride of sstride
and y
should be a multiple of pack_lines.
Subsampled formats will use the horizontally and vertically cosited component from the source. Subsampling should be performed before packing.
Because this function does not have an x coordinate, it is not possible to pack pixels starting from an unaligned position. For tiled images this means that packing should start from a tile coordinate. For subsampled formats this means that a complete pixel needs to be packed.
info |
||
flags |
flags to control the packing |
|
src |
a source array |
|
sstride |
the source array stride |
|
data |
pointers to the destination data planes |
|
stride |
strides of the destination planes |
|
chroma_site |
the chroma siting of the target when subsampled (not used) |
|
y |
the y position in the image to pack to |
|
width |
the amount of pixels to pack. |
#define GST_VIDEO_FORMAT_INFO_IS_YUV(info) ((info)->flags & GST_VIDEO_FORMAT_FLAG_YUV)
#define GST_VIDEO_FORMAT_INFO_IS_RGB(info) ((info)->flags & GST_VIDEO_FORMAT_FLAG_RGB)
#define GST_VIDEO_FORMAT_INFO_IS_GRAY(info) ((info)->flags & GST_VIDEO_FORMAT_FLAG_GRAY)
#define GST_VIDEO_FORMAT_INFO_HAS_ALPHA(info) ((info)->flags & GST_VIDEO_FORMAT_FLAG_ALPHA)
#define GST_VIDEO_FORMAT_INFO_IS_LE(info) ((info)->flags & GST_VIDEO_FORMAT_FLAG_LE)
#define GST_VIDEO_FORMAT_INFO_HAS_PALETTE(info) ((info)->flags & GST_VIDEO_FORMAT_FLAG_PALETTE)
#define GST_VIDEO_FORMAT_INFO_IS_COMPLEX(info) ((info)->flags & GST_VIDEO_FORMAT_FLAG_COMPLEX)
#define GST_VIDEO_FORMAT_INFO_N_COMPONENTS(info) ((info)->n_components)
#define GST_VIDEO_FORMAT_INFO_PSTRIDE(info,c) ((info)->pixel_stride[c])
pixel stride for the given component. This is the amount of bytes to the pixel immediately to the right, so basically bytes from one pixel to the next. When bits < 8, the stride is expressed in bits.
Examples: for 24-bit RGB, the pixel stride would be 3 bytes, while it would be 4 bytes for RGBx or ARGB, and 8 bytes for ARGB64 or AYUV64. For planar formats such as I420 the pixel stride is usually 1. For YUY2 it would be 2 bytes.
#define GST_VIDEO_FORMAT_INFO_N_PLANES(info) ((info)->n_planes)
Number of planes. This is the number of planes the pixel layout is organized into in memory. The number of planes can be less than the number of components (e.g. Y,U,V,A or R,G,B,A) when multiple components are packed into one plane.
Examples: RGB/RGBx/RGBA: 1 plane, 3/3/4 components; I420: 3 planes, 3 components; NV21/NV12: 2 planes, 3 components.
#define GST_VIDEO_FORMAT_INFO_PLANE(info,c) ((info)->plane[c])
Plane number where the given component can be found. A plane may contain data for multiple components.
#define GST_VIDEO_FORMAT_INFO_SCALE_WIDTH(info,c,w) GST_VIDEO_SUB_SCALE ((info)->w_sub[c],(w))
#define GST_VIDEO_FORMAT_INFO_SCALE_HEIGHT(info,c,h) GST_VIDEO_SUB_SCALE ((info)->h_sub[c],(h))
#define GST_VIDEO_FORMAT_INFO_STRIDE(info,strides,comp) ((strides)[(info)->plane[comp]])
Row stride in bytes, that is number of bytes from the first pixel component of a row to the first pixel component in the next row. This might include some row padding (memory not actually used for anything, to make sure the beginning of the next row is aligned in a particular way).
GstVideoFormat gst_video_format_from_masks (gint depth
,gint bpp
,gint endianness
,guint red_mask
,guint green_mask
,guint blue_mask
,guint alpha_mask
);
Find the GstVideoFormat for the given parameters.
depth |
the amount of bits used for a pixel |
|
bpp |
the amount of bits used to store a pixel. This value is bigger than depth |
|
endianness |
the endianness of the masks, G_LITTLE_ENDIAN or G_BIG_ENDIAN |
|
red_mask |
the red mask |
|
green_mask |
the green mask |
|
blue_mask |
the blue mask |
|
alpha_mask |
the alpha mask, or 0 if no alpha mask |
a GstVideoFormat or GST_VIDEO_FORMAT_UNKNOWN when the parameters do not specify a known format.
GstVideoFormat
gst_video_format_from_fourcc (guint32 fourcc
);
Converts a FOURCC value into the corresponding GstVideoFormat. If the FOURCC cannot be represented by GstVideoFormat, GST_VIDEO_FORMAT_UNKNOWN is returned.
guint32
gst_video_format_to_fourcc (GstVideoFormat format
);
Converts a GstVideoFormat value into the corresponding FOURCC. Only
a few YUV formats have corresponding FOURCC values. If format
has
no corresponding FOURCC value, 0 is returned.
GstVideoFormat
gst_video_format_from_string (const gchar *format
);
Convert the format
string to its GstVideoFormat.
the GstVideoFormat for format
or GST_VIDEO_FORMAT_UNKNOWN when the
string is not a known format.
const gchar *
gst_video_format_to_string (GstVideoFormat format
);
Returns a string containing a descriptive name for the GstVideoFormat if there is one, or NULL otherwise.
const GstVideoFormatInfo *
gst_video_format_get_info (GstVideoFormat format
);
Get the GstVideoFormatInfo for format
#define GST_VIDEO_CAPS_MAKE(format)
Generic caps string for video, for use in pad templates.
void gst_video_color_range_offsets (GstVideoColorRange range
,const GstVideoFormatInfo *info
,gint offset[GST_VIDEO_MAX_COMPONENTS]
,gint scale[GST_VIDEO_MAX_COMPONENTS]
);
Compute the offset and scale values for each component of info
. For each
component, (c[i] - offset[i]) / scale[i] will scale the component c[i] to the
range [0.0 .. 1.0].
The reverse operation (c[i] * scale[i]) + offset[i] can be used to convert
the component values in range [0.0 .. 1.0] back to their representation in
info
and range
.
gboolean gst_video_color_matrix_get_Kr_Kb (GstVideoColorMatrix matrix
,gdouble *Kr
,gdouble *Kb
);
Get the coefficients used to convert between Y'PbPr and R'G'B' using matrix
.
When:

0.0 <= [Y', R', G', B'] <= 1.0
-0.5 <= [Pb, Pr] <= 0.5

the general conversion is given by:

Y' = Kr*R' + (1-Kr-Kb)*G' + Kb*B'
Pb = (B'-Y')/(2*(1-Kb))
Pr = (R'-Y')/(2*(1-Kr))

and the other way around:

R' = Y' + Cr*2*(1-Kr)
G' = Y' - Cb*2*(1-Kb)*Kb/(1-Kr-Kb) - Cr*2*(1-Kr)*Kr/(1-Kr-Kb)
B' = Y' + Cb*2*(1-Kb)
Since: 1.6
gdouble gst_video_color_transfer_decode (GstVideoTransferFunction func
,gdouble val
);
Convert val
to its gamma decoded value. This is the inverse operation of
gst_video_color_transfer_encode().
For a non-linear value L' in the range [0..1], conversion to the linear L is in general performed with a power function like:
L = L' ^ gamma
Depending on func
, different formulas might be applied. Some formulas
encode a linear segment in the lower range.
Since: 1.6
gdouble gst_video_color_transfer_encode (GstVideoTransferFunction func
,gdouble val
);
Convert val
to its gamma encoded value.
For a linear value L in the range [0..1], conversion to the non-linear (gamma encoded) L' is in general performed with a power function like:
L' = L ^ (1 / gamma)
Depending on func
, different formulas might be applied. Some formulas
encode a linear segment in the lower range.
Since: 1.6
gboolean gst_video_colorimetry_matches (GstVideoColorimetry *cinfo
,const gchar *color
);
Check if the colorimetry information in cinfo
matches that of the
string color
.
gboolean gst_video_colorimetry_is_equal (const GstVideoColorimetry *cinfo
,const GstVideoColorimetry *other
);
Compare the two colorimetry sets for equality.
Since: 1.6
gboolean gst_video_colorimetry_from_string (GstVideoColorimetry *cinfo
,const gchar *color
);
Parse the colorimetry string and update cinfo
with the parsed
values.
gchar *
gst_video_colorimetry_to_string (GstVideoColorimetry *cinfo
);
Make a string representation of cinfo
.
#define GST_VIDEO_INFO_IS_GRAY(i) (GST_VIDEO_FORMAT_INFO_IS_GRAY((i)->finfo))
#define GST_VIDEO_INFO_HAS_ALPHA(i) (GST_VIDEO_FORMAT_INFO_HAS_ALPHA((i)->finfo))
#define GST_VIDEO_INFO_IS_INTERLACED(i) ((i)->interlace_mode != GST_VIDEO_INTERLACE_MODE_PROGRESSIVE)
#define GST_VIDEO_INFO_FLAG_IS_SET(i,flag) ((GST_VIDEO_INFO_FLAGS(i) & (flag)) == (flag))
#define GST_VIDEO_INFO_FLAG_SET(i,flag) (GST_VIDEO_INFO_FLAGS(i) |= (flag))
#define GST_VIDEO_INFO_FLAG_UNSET(i,flag) (GST_VIDEO_INFO_FLAGS(i) &= ~(flag))
#define GST_VIDEO_INFO_N_PLANES(i) (GST_VIDEO_FORMAT_INFO_N_PLANES((i)->finfo))
#define GST_VIDEO_INFO_N_COMPONENTS(i) GST_VIDEO_FORMAT_INFO_N_COMPONENTS((i)->finfo)
#define GST_VIDEO_INFO_COMP_DEPTH(i,c) GST_VIDEO_FORMAT_INFO_DEPTH((i)->finfo,(c))
#define GST_VIDEO_INFO_COMP_DATA(i,d,c) GST_VIDEO_FORMAT_INFO_DATA((i)->finfo,d,(c))
#define GST_VIDEO_INFO_COMP_OFFSET(i,c) GST_VIDEO_FORMAT_INFO_OFFSET((i)->finfo,(i)->offset,(c))
#define GST_VIDEO_INFO_COMP_STRIDE(i,c) GST_VIDEO_FORMAT_INFO_STRIDE((i)->finfo,(i)->stride,(c))
#define GST_VIDEO_INFO_COMP_WIDTH(i,c) GST_VIDEO_FORMAT_INFO_SCALE_WIDTH((i)->finfo,(c),(i)->width)
#define GST_VIDEO_INFO_COMP_HEIGHT(i,c) GST_VIDEO_FORMAT_INFO_SCALE_HEIGHT((i)->finfo,(c),(i)->height)
#define GST_VIDEO_INFO_COMP_PLANE(i,c) GST_VIDEO_FORMAT_INFO_PLANE((i)->finfo,(c))
#define GST_VIDEO_INFO_COMP_PSTRIDE(i,c) GST_VIDEO_FORMAT_INFO_PSTRIDE((i)->finfo,(c))
#define GST_VIDEO_INFO_COMP_POFFSET(i,c) GST_VIDEO_FORMAT_INFO_POFFSET((i)->finfo,(c))
#define GST_VIDEO_INFO_MULTIVIEW_FLAGS(i) ((i)->ABI.abi.multiview_flags)
#define GST_VIDEO_INFO_MULTIVIEW_MODE(i) ((i)->ABI.abi.multiview_mode)
void
gst_video_info_init (GstVideoInfo *info
);
Initialize info
with default values.
void gst_video_info_set_format (GstVideoInfo *info
,GstVideoFormat format
,guint width
,guint height
);
Set the default info for a video frame of format
and width
and height
.
Note: This initializes info
first; no values are preserved. This function
does not set the offsets correctly for interlaced vertically
subsampled formats.
gboolean gst_video_info_from_caps (GstVideoInfo *info
,const GstCaps *caps
);
Parse caps
and update info
.
GstCaps *
gst_video_info_to_caps (GstVideoInfo *info
);
Convert the values of info
into a GstCaps.
gboolean gst_video_info_convert (GstVideoInfo *info
,GstFormat src_format
,gint64 src_value
,GstFormat dest_format
,gint64 *dest_value
);
Converts among various GstFormat types. This function handles GST_FORMAT_BYTES, GST_FORMAT_TIME, and GST_FORMAT_DEFAULT. For raw video, GST_FORMAT_DEFAULT corresponds to video frames. This function can be used to handle pad queries of the type GST_QUERY_CONVERT.
gboolean gst_video_info_is_equal (const GstVideoInfo *info
,const GstVideoInfo *other
);
Compares two GstVideoInfo structures and returns whether they are equal.
void gst_video_info_align (GstVideoInfo *info
,GstVideoAlignment *align
);
Adjust the offset and stride fields in info
so that the padding and
stride alignment in align
is respected.
Extra padding will be added to the right side when stride alignment padding
is required and align
will be updated with the new padding values.
gboolean gst_video_frame_map_id (GstVideoFrame *frame
,GstVideoInfo *info
,GstBuffer *buffer
,gint id
,GstMapFlags flags
);
Use info
and buffer
to fill in the values of frame
with the video frame
information of frame id
.
When id
is -1, the default frame is mapped. When id
!= -1, this function
will return FALSE
when there is no GstVideoMeta with that id.
All video planes of buffer
will be mapped and the pointers will be set in
frame->data
.
frame |
pointer to GstVideoFrame |
|
info |
||
buffer |
the buffer to map |
|
id |
the frame id to map |
|
flags |
gboolean gst_video_frame_map (GstVideoFrame *frame
,GstVideoInfo *info
,GstBuffer *buffer
,GstMapFlags flags
);
Use info
and buffer
to fill in the values of frame
.
All video planes of buffer
will be mapped and the pointers will be set in
frame->data
.
void
gst_video_frame_unmap (GstVideoFrame *frame
);
Unmap the memory previously mapped with gst_video_frame_map().
gboolean gst_video_frame_copy (GstVideoFrame *dest
,const GstVideoFrame *src
);
Copy the contents from src
to dest
.
gboolean gst_video_frame_copy_plane (GstVideoFrame *dest
,const GstVideoFrame *src
,guint plane
);
Copy the plane with index plane
from src
to dest
.
#define GST_VIDEO_FRAME_FLAG_IS_SET(f,fl) ((GST_VIDEO_FRAME_FLAGS(f) & (fl)) == (fl))
#define GST_VIDEO_FRAME_IS_INTERLACED(f) (GST_VIDEO_FRAME_FLAG_IS_SET(f, GST_VIDEO_FRAME_FLAG_INTERLACED))
#define GST_VIDEO_FRAME_IS_TFF(f) (GST_VIDEO_FRAME_FLAG_IS_SET(f, GST_VIDEO_FRAME_FLAG_TFF))
#define GST_VIDEO_FRAME_IS_RFF(f) (GST_VIDEO_FRAME_FLAG_IS_SET(f, GST_VIDEO_FRAME_FLAG_RFF))
#define GST_VIDEO_FRAME_IS_ONEFIELD(f) (GST_VIDEO_FRAME_FLAG_IS_SET(f, GST_VIDEO_FRAME_FLAG_ONEFIELD))
#define GST_VIDEO_FRAME_N_PLANES(f) (GST_VIDEO_INFO_N_PLANES(&(f)->info))
#define GST_VIDEO_FRAME_PLANE_OFFSET(f,p) (GST_VIDEO_INFO_PLANE_OFFSET(&(f)->info,(p)))
#define GST_VIDEO_FRAME_PLANE_STRIDE(f,p) (GST_VIDEO_INFO_PLANE_STRIDE(&(f)->info,(p)))
#define GST_VIDEO_FRAME_N_COMPONENTS(f) GST_VIDEO_INFO_N_COMPONENTS(&(f)->info)
#define GST_VIDEO_FRAME_COMP_DEPTH(f,c) GST_VIDEO_INFO_COMP_DEPTH(&(f)->info,(c))
#define GST_VIDEO_FRAME_COMP_DATA(f,c) GST_VIDEO_INFO_COMP_DATA(&(f)->info,(f)->data,(c))
#define GST_VIDEO_FRAME_COMP_STRIDE(f,c) GST_VIDEO_INFO_COMP_STRIDE(&(f)->info,(c))
#define GST_VIDEO_FRAME_COMP_OFFSET(f,c) GST_VIDEO_INFO_COMP_OFFSET(&(f)->info,(c))
#define GST_VIDEO_FRAME_COMP_WIDTH(f,c) GST_VIDEO_INFO_COMP_WIDTH(&(f)->info,(c))
#define GST_VIDEO_FRAME_COMP_HEIGHT(f,c) GST_VIDEO_INFO_COMP_HEIGHT(&(f)->info,(c))
#define GST_VIDEO_FRAME_COMP_PLANE(f,c) GST_VIDEO_INFO_COMP_PLANE(&(f)->info,(c))
#define GST_VIDEO_FRAME_COMP_PSTRIDE(f,c) GST_VIDEO_INFO_COMP_PSTRIDE(&(f)->info,(c))
#define GST_VIDEO_FRAME_COMP_POFFSET(f,c) GST_VIDEO_INFO_COMP_POFFSET(&(f)->info,(c))
guint gst_video_tile_get_index (GstVideoTileMode mode
,gint x
,gint y
,gint x_tiles
,gint y_tiles
);
Get the tile index of the tile at coordinates x
and y
in the tiled
image of x_tiles
by y_tiles
.
Use this method when mode
is of type GST_VIDEO_TILE_MODE_INDEXED
.
mode |
||
x |
x coordinate |
|
y |
y coordinate |
|
x_tiles |
number of horizontal tiles |
|
y_tiles |
number of vertical tiles |
Since: 1.4
#define GST_VIDEO_TILE_MAKE_MODE(num, type)
Use this macro to create new tile modes.
#define GST_VIDEO_TILE_MODE_TYPE(mode) ((mode) & GST_VIDEO_TILE_TYPE_MASK)
Get the tile mode type of mode
#define GST_VIDEO_TILE_MODE_IS_INDEXED(mode) (GST_VIDEO_TILE_MODE_TYPE(mode) == GST_VIDEO_TILE_TYPE_INDEXED)
Check if mode
is an indexed tile type
#define GST_VIDEO_TILE_MAKE_STRIDE(x_tiles, y_tiles)
Encode the number of tiles in X and Y into the stride.
#define GST_VIDEO_TILE_X_TILES(stride) ((stride) & GST_VIDEO_TILE_X_TILES_MASK)
Extract the number of tiles in X from the stride value.
#define GST_VIDEO_TILE_Y_TILES(stride) ((stride) >> GST_VIDEO_TILE_Y_TILES_SHIFT)
Extract the number of tiles in Y from the stride value.
gboolean gst_video_blend (GstVideoFrame *dest
,GstVideoFrame *src
,gint x
,gint y
,gfloat global_alpha
);
Lets you blend the src
image into the dest
image.
dest |
The GstVideoFrame where to blend |
|
src |
the GstVideoFrame that we want to blend into dest |
|
x |
The x offset in pixels where the src image should be blended |
|
y |
the y offset in pixels where the src image should be blended |
|
global_alpha |
the global_alpha each per-pixel alpha value is multiplied with |
void gst_video_blend_scale_linear_RGBA (GstVideoInfo *src
,GstBuffer *src_buffer
,gint dest_height
,gint dest_width
,GstVideoInfo *dest
,GstBuffer **dest_buffer
);
Scales a buffer containing RGBA (or AYUV) video. This is an internal helper function which is used to scale subtitle overlays, and may be deprecated in the near future. Use GstVideoScaler to scale video buffers instead.
src |
the GstVideoInfo describing the video data in src_buffer |
|
src_buffer |
the source buffer containing video pixels to scale |
|
dest_height |
the height in pixels to scale the video data in src_buffer to |
|
dest_width |
the width in pixels to scale the video data in src_buffer to |
|
dest |
pointer to a GstVideoInfo structure that will be filled in
with the details for dest_buffer |
[out] |
dest_buffer |
a pointer to a GstBuffer variable, which will be set to a newly-allocated buffer containing the scaled pixels. |
[out] |
GstVideoConverter * gst_video_converter_new (GstVideoInfo *in_info
,GstVideoInfo *out_info
,GstStructure *config
);
Create a new converter object to convert between in_info
and out_info
with config
.
[skip]
Since: 1.6
void
gst_video_converter_free (GstVideoConverter *convert
);
Free convert
Since: 1.6
const GstStructure *
gst_video_converter_get_config (GstVideoConverter *convert
);
Get the current configuration of convert
.
a GstStructure that remains valid for as long as convert
is valid
or until gst_video_converter_set_config()
is called.
gboolean gst_video_converter_set_config (GstVideoConverter *convert
,GstStructure *config
);
Set config
as extra configuration for convert
.
If the parameters in config
can not be set exactly, this function returns
FALSE
and will try to update as much state as possible. The new state can
then be retrieved and refined with gst_video_converter_get_config()
.
Look at the GST_VIDEO_CONVERTER_OPT_* fields to check valid configuration options and values.
Since: 1.6
void gst_video_converter_frame (GstVideoConverter *convert
,const GstVideoFrame *src
,GstVideoFrame *dest
);
Convert the pixels of src
into dest
using convert
.
Since: 1.6
const GValue *
gst_video_multiview_get_mono_modes (void
);
A const GValue containing a list of mono video modes
Utility function that returns a GValue with a GstList of mono video modes (mono/left/right) for use in caps negotiations.
Since: 1.6
const GValue *
gst_video_multiview_get_unpacked_modes
(void
);
A const GValue containing a list of 'unpacked' stereo video modes
Utility function that returns a GValue with a GstList of unpacked stereo video modes (separated/frame-by-frame/frame-by-frame-multiview) for use in caps negotiations.
Since: 1.6
const GValue *
gst_video_multiview_get_doubled_height_modes
(void
);
A const GValue containing a list of stereo video modes
Utility function that returns a GValue with a GstList of packed stereo video modes with double the height of a single view for use in caps negotiations. Currently this is top-bottom and row-interleaved.
Since: 1.6
const GValue *
gst_video_multiview_get_doubled_size_modes
(void
);
A const GValue containing a list of stereo video modes
Utility function that returns a GValue with a GstList of packed stereo video modes that have double the width/height of a single view for use in caps negotiation. Currently this is just 'checkerboard' layout.
Since: 1.6
const GValue *
gst_video_multiview_get_doubled_width_modes
(void
);
A const GValue containing a list of stereo video modes
Utility function that returns a GValue with a GstList of packed stereo video modes with double the width of a single view for use in caps negotiations. Currently this is side-by-side, side-by-side-quincunx and column-interleaved.
Since: 1.6
GstVideoMultiviewMode
gst_video_multiview_mode_from_caps_string
(const gchar *caps_mview_mode
);
The GstVideoMultiviewMode value
Given a string from a caps multiview-mode field, output the corresponding GstVideoMultiviewMode or GST_VIDEO_MULTIVIEW_MODE_NONE
Since: 1.6
const gchar *
gst_video_multiview_mode_to_caps_string
(GstVideoMultiviewMode mview_mode
);
The caps string representation of the mode, or NULL if invalid.
Given a GstVideoMultiviewMode returns the multiview-mode caps string for insertion into a caps structure
Since: 1.6
gboolean gst_video_multiview_guess_half_aspect (GstVideoMultiviewMode mv_mode
,guint width
,guint height
,guint par_n
,guint par_d
);
mv_mode |
||
width |
Video frame width in pixels |
|
height |
Video frame height in pixels |
|
par_n |
Numerator of the video pixel-aspect-ratio |
|
par_d |
Denominator of the video pixel-aspect-ratio |
A boolean indicating whether the GST_VIDEO_MULTIVIEW_FLAG_HALF_ASPECT flag should be set.
Utility function that heuristically guesses whether a frame-packed stereoscopic video contains half width/height encoded views or full-frame views, by looking at the overall display aspect ratio.
Since: 1.6
void gst_video_multiview_video_info_change_mode (GstVideoInfo *info
,GstVideoMultiviewMode out_mview_mode
,GstVideoMultiviewFlags out_mview_flags
);
Utility function that transforms the width/height/PAR and multiview mode and flags of a GstVideoInfo into the requested mode.
info |
A GstVideoInfo structure to operate on |
|
out_mview_mode |
A GstVideoMultiviewMode value |
|
out_mview_flags |
A set of GstVideoMultiviewFlags |
Since: 1.6
struct GstVideoAlignment { guint padding_top; guint padding_bottom; guint padding_left; guint padding_right; guint stride_align[GST_VIDEO_MAX_PLANES]; };
Extra alignment parameters for the memory of video buffers. This structure is usually used to configure the bufferpool if it supports the GST_BUFFER_POOL_OPTION_VIDEO_ALIGNMENT.
#define GST_META_TAG_VIDEO_STR "video"
This metadata is relevant for video streams.
Since: 1.2
#define GST_META_TAG_VIDEO_ORIENTATION_STR "orientation"
This metadata stays relevant as long as video orientation is unchanged.
Since: 1.2
#define GST_META_TAG_VIDEO_SIZE_STR "size"
This metadata stays relevant as long as video size is unchanged.
Since: 1.2
#define GST_META_TAG_VIDEO_COLORSPACE_STR "colorspace"
This metadata stays relevant as long as video colorspace is unchanged.
Since: 1.2
Enum value describing the most common video formats.
Unknown or unset video format id |
||
Encoded video format. Only ever use this in caps for special video formats in combination with non-system memory GstCapsFeatures where it does not make sense to specify a real video format. |
||
planar 4:2:0 YUV |
||
planar 4:2:0 YVU (like I420 but UV planes swapped) |
||
packed 4:2:2 YUV (Y0-U0-Y1-V0 Y2-U2-Y3-V2 Y4 ...) |
||
packed 4:2:2 YUV (U0-Y0-V0-Y1 U2-Y2-V2-Y3 U4 ...) |
||
packed 4:4:4 YUV with alpha channel (A0-Y0-U0-V0 ...) |
||
sparse rgb packed into 32 bit, space last |
||
sparse reverse rgb packed into 32 bit, space last |
||
sparse rgb packed into 32 bit, space first |
||
sparse reverse rgb packed into 32 bit, space first |
||
rgb with alpha channel last |
||
reverse rgb with alpha channel last |
||
rgb with alpha channel first |
||
reverse rgb with alpha channel first |
||
rgb |
||
reverse rgb |
||
planar 4:1:1 YUV |
||
planar 4:2:2 YUV |
||
packed 4:2:2 YUV (Y0-V0-Y1-U0 Y2-V2-Y3-U2 Y4 ...) |
||
planar 4:4:4 YUV |
||
packed 4:2:2 10-bit YUV, complex format |
||
packed 4:2:2 16-bit YUV, Y0-U0-Y1-V1 order |
||
planar 4:2:0 YUV with interleaved UV plane |
||
planar 4:2:0 YUV with interleaved VU plane |
||
8-bit grayscale |
||
16-bit grayscale, most significant byte first |
||
16-bit grayscale, least significant byte first |
||
packed 4:4:4 YUV |
||
rgb 5-6-5 bits per component |
||
reverse rgb 5-6-5 bits per component |
||
rgb 5-5-5 bits per component |
||
reverse rgb 5-5-5 bits per component |
||
packed 10-bit 4:2:2 YUV (U0-Y0-V0-Y1 U2-Y2-V2-Y3 U4 ...) |
||
planar 4:4:2:0 AYUV |
||
8-bit paletted RGB |
||
planar 4:1:0 YUV |
||
planar 4:1:0 YUV (like YUV9 but UV planes swapped) |
||
packed 4:1:1 YUV (Cb-Y0-Y1-Cr-Y2-Y3 ...) |
||
rgb with alpha channel first, 16 bits per channel |
||
packed 4:4:4 YUV with alpha channel, 16 bits per channel (A0-Y0-U0-V0 ...) |
||
packed 4:4:4 RGB, 10 bits per channel |
||
planar 4:2:0 YUV, 10 bits per channel |
||
planar 4:2:0 YUV, 10 bits per channel |
||
planar 4:2:2 YUV, 10 bits per channel |
||
planar 4:2:2 YUV, 10 bits per channel |
||
planar 4:4:4 YUV, 10 bits per channel |
||
planar 4:4:4 YUV, 10 bits per channel |
||
planar 4:4:4 RGB, 8 bits per channel |
||
planar 4:4:4 RGB, 10 bits per channel |
||
planar 4:4:4 RGB, 10 bits per channel |
||
planar 4:2:2 YUV with interleaved UV plane |
||
planar 4:4:4 YUV with interleaved UV plane |
||
NV12 with 64x32 tiling in zigzag pattern |
||
planar 4:4:2:0 YUV, 10 bits per channel |
||
planar 4:4:2:0 YUV, 10 bits per channel |
||
planar 4:4:2:2 YUV, 10 bits per channel |
||
planar 4:4:2:2 YUV, 10 bits per channel |
||
planar 4:4:4:4 YUV, 10 bits per channel |
||
planar 4:4:4:4 YUV, 10 bits per channel |
||
planar 4:2:2 YUV with interleaved VU plane (Since 1.6) |
struct GstVideoFormatInfo {
  GstVideoFormat        format;
  const gchar          *name;
  const gchar          *description;
  GstVideoFormatFlags   flags;
  guint                 bits;
  guint                 n_components;
  guint                 shift[GST_VIDEO_MAX_COMPONENTS];
  guint                 depth[GST_VIDEO_MAX_COMPONENTS];
  gint                  pixel_stride[GST_VIDEO_MAX_COMPONENTS];
  guint                 n_planes;
  guint                 plane[GST_VIDEO_MAX_COMPONENTS];
  guint                 poffset[GST_VIDEO_MAX_COMPONENTS];
  guint                 w_sub[GST_VIDEO_MAX_COMPONENTS];
  guint                 h_sub[GST_VIDEO_MAX_COMPONENTS];
  GstVideoFormat        unpack_format;
  GstVideoFormatUnpack  unpack_func;
  gint                  pack_lines;
  GstVideoFormatPack    pack_func;
  GstVideoTileMode      tile_mode;
  guint                 tile_ws;
  guint                 tile_hs;
  gpointer              _gst_reserved[GST_PADDING];
};
Information for a video format.
GstVideoFormat |
||
const gchar * |
string representation of the format |
|
const gchar * |
human readable description of the format |
|
GstVideoFormatFlags |
||
guint |
The number of bits used to pack data items. This can be less than 8 when multiple pixels are stored in a byte. For values > 8, multiple bytes should be read according to the endianness flag before applying the shift and mask. |
|
guint |
the number of components in the video format. |
|
guint |
the number of bits to shift away to get the component data |
|
guint |
the depth in bits for each component |
|
gint |
the pixel stride of each component. This is the number of bytes to the pixel immediately to the right. When bits < 8, the stride is expressed in bits. For example, this is 3 bytes for 24-bit RGB and 4 bytes for RGBx or ARGB. |
|
guint |
the number of planes for this format. The number of planes can be less than the amount of components when multiple components are packed into one plane. |
|
guint |
the plane number where a component can be found |
|
guint |
the offset in the plane where the first pixel of the components can be found. |
|
guint |
subsampling factor of the width for the component. Use GST_VIDEO_SUB_SCALE to scale a width. |
|
guint |
subsampling factor of the height for the component. Use GST_VIDEO_SUB_SCALE to scale a height. |
|
GstVideoFormat |
the format of the unpacked pixels. This format must have the GST_VIDEO_FORMAT_FLAG_UNPACK flag set. |
|
GstVideoFormatUnpack |
an unpack function for this format |
|
gint |
the amount of lines that will be packed |
|
GstVideoFormatPack |
a pack function for this format |
|
GstVideoTileMode |
The tiling mode
|
|
guint |
||
guint |
||
gpointer |
The different video flags that a format info can have.
The video format is YUV, components are numbered 0=Y, 1=U, 2=V. |
||
The video format is RGB, components are numbered 0=R, 1=G, 2=B. |
||
The video is gray, there is one gray component with index 0. |
||
The video format has an alpha component with index 3. |
||
The video format has data stored in little endian byte order. |
||
The video format has a palette. The palette is stored in the second plane and indexes are stored in the first plane. |
||
The video format has a complex layout that can't be described with the usual information in the GstVideoFormatInfo. |
||
This format can be used in a GstVideoFormatUnpack and GstVideoFormatPack function. |
||
The format is tiled, there is tiling information in the last plane. |
The different flags that can be used when packing and unpacking.
No flag |
||
When the source has a smaller depth than the target format, set the least significant bits of the target to 0. This is likely slightly faster but less accurate. When this flag is not specified, the most significant bits of the source are duplicated in the least significant bits of the destination. |
||
The source is interlaced. The unpacked format will be interlaced as well with each line containing information from alternating fields. (Since 1.2) |
Possible color range values. These constants are defined for 8 bit color values and can be scaled for other bit depths.
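Since the range constants are defined for 8-bit values, scaling them to a higher bit depth is a left shift by the extra bits. A minimal standalone sketch of that convention (the helper name is mine, not part of the API):

```c
/* Scale a nominal 8-bit video range value (e.g. the 16..235 luma range
 * of GST_VIDEO_COLOR_RANGE_16_235) to a higher bit depth by shifting. */
static int
scale_range_value (int value_8bit, int depth)
{
  return value_8bit << (depth - 8);
}
```

For 10-bit video this maps the 16..235 luma range to 64..940, matching the usual studio-range definitions.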
The color matrix is used to convert between Y'PbPr and non-linear RGB (R'G'B')
The color primaries define how to transform linear RGB values to and from the CIE XYZ colorspace.
The video transfer function defines the formula for converting between non-linear RGB (R'G'B') and linear RGB
unknown transfer function |
||
linear RGB, gamma 1.0 curve |
||
Gamma 1.8 curve |
||
Gamma 2.0 curve |
||
Gamma 2.2 curve |
||
Gamma 2.2 curve with a linear segment in the lower range |
||
Gamma 2.2 curve with a linear segment in the lower range |
||
Gamma 2.4 curve with a linear segment in the lower range |
||
Gamma 2.8 curve |
||
Logarithmic transfer characteristic 100:1 range |
||
Logarithmic transfer characteristic 316.22777:1 range |
||
Gamma 2.2 curve with a linear segment in the lower range. Used for BT.2020 with 12 bits per component. Since: 1.6. |
typedef struct {
  GstVideoColorRange        range;
  GstVideoColorMatrix       matrix;
  GstVideoTransferFunction  transfer;
  GstVideoColorPrimaries    primaries;
} GstVideoColorimetry;
Structure describing the color info.
GstVideoColorRange |
the color range. This is the valid range for the samples. It is used to convert the samples to Y'PbPr values. |
|
GstVideoColorMatrix |
the color matrix. Used to convert between Y'PbPr and non-linear RGB (R'G'B') |
|
GstVideoTransferFunction |
the transfer function, used to convert between R'G'B' and RGB |
|
GstVideoColorPrimaries |
the color primaries, used to convert between R'G'B' and CIE XYZ |
struct GstVideoInfo {
  const GstVideoFormatInfo *finfo;
  GstVideoInterlaceMode     interlace_mode;
  GstVideoFlags             flags;
  gint                      width;
  gint                      height;
  gsize                     size;
  gint                      views;
  GstVideoChromaSite        chroma_site;
  GstVideoColorimetry       colorimetry;
  gint                      par_n;
  gint                      par_d;
  gint                      fps_n;
  gint                      fps_d;
  gsize                     offset[GST_VIDEO_MAX_PLANES];
  gint                      stride[GST_VIDEO_MAX_PLANES];

  /* Union preserves padded struct size for backwards compat
   * Consumer code should use the accessor macros for fields */
  union {
    struct {
      GstVideoMultiviewMode  multiview_mode;
      GstVideoMultiviewFlags multiview_flags;
    } abi;
  };
};
Information describing image properties. This information can be filled
in from GstCaps with gst_video_info_from_caps()
. The information is also used
to store the specific video info when mapping a video frame with
gst_video_frame_map()
.
Use the provided macros to access the info in this structure.
const GstVideoFormatInfo * |
the format info of the video |
|
GstVideoInterlaceMode |
the interlace mode |
|
GstVideoFlags |
additional video flags |
|
gint |
the width of the video |
|
gint |
the height of the video |
|
the default size of one frame |
||
gint |
the number of views for multiview video |
|
GstVideoChromaSite |
||
GstVideoColorimetry |
the colorimetry info |
|
gint |
the pixel-aspect-ratio numerator |
|
gint |
the pixel-aspect-ratio denominator |
|
gint |
the framerate numerator |
|
gint |
the framerate denominator |
|
offsets of the planes |
||
gint |
strides of the planes |
The possible values of the GstVideoInterlaceMode describing the interlace mode of the stream.
all frames are progressive |
||
2 fields are interleaved in one video frame. Extra buffer flags describe the field order. |
||
frames contain both interlaced and progressive video; the buffer flags describe the frame and fields. |
||
2 fields are stored in one buffer, use the frame ID to get access to the required field. For multiview (the 'views' property > 1) the fields of view N can be found at frame ID (N * 2) and (N * 2) + 1. Each field has only half the amount of lines as noted in the height property. This mode requires multiple GstVideoMeta metadata to describe the fields. |
All possible stereoscopic 3D and multiview representations. In conjunction with GstVideoMultiviewFlags, describes how multiview content is being transported in the stream.
A special value indicating no multiview information. Used in GstVideoInfo and other places to indicate that no specific multiview handling has been requested or provided. This value is never carried on caps. |
||
All frames are monoscopic. |
||
All frames represent a left-eye view. |
||
All frames represent a right-eye view. |
||
Left and right eye views are provided in the left and right half of the frame respectively. |
||
Left and right eye views are provided in the left and right half of the frame, but have been sampled using quincunx method, with half-pixel offset between the 2 views. |
||
Alternating vertical columns of pixels represent the left and right eye view respectively. |
||
Alternating horizontal rows of pixels represent the left and right eye view respectively. |
||
The top half of the frame contains the left eye, and the bottom half the right eye. |
||
Pixels are arranged with alternating pixels representing left and right eye views in a checkerboard fashion. |
||
Left and right eye views are provided in separate frames alternately. |
||
Multiple independent views are provided in separate frames in sequence. This method only applies to raw video buffers at the moment. Specific view identification is via the GstVideoMultiviewMeta and GstVideoMeta(s) on raw video buffers. |
||
Multiple views are provided as separate GstMemory framebuffers attached to each GstBuffer, described by the GstVideoMultiviewMeta and GstVideoMeta(s) |
GstVideoMultiviewFramePacking represents the subset of GstVideoMultiviewMode values that can be applied to any video frame without needing extra metadata. It can be used by elements that provide a property to override the multiview interpretation of a video stream when the video doesn't contain any markers.
This enum is used (for example) on playbin, to re-interpret a played video stream as a stereoscopic video. The individual enum values are equivalent to and have the same value as the matching GstVideoMultiviewMode.
A special value indicating no frame packing info. |
||
All frames are monoscopic. |
||
All frames represent a left-eye view. |
||
All frames represent a right-eye view. |
||
Left and right eye views are provided in the left and right half of the frame respectively. |
||
Left and right eye views are provided in the left and right half of the frame, but have been sampled using quincunx method, with half-pixel offset between the 2 views. |
||
Alternating vertical columns of pixels represent the left and right eye view respectively. |
||
Alternating horizontal rows of pixels represent the left and right eye view respectively. |
||
The top half of the frame contains the left eye, and the bottom half the right eye. |
||
Pixels are arranged with alternating pixels representing left and right eye views in a checkerboard fashion. |
GstVideoMultiviewFlags are used to indicate extra properties of a stereo/multiview stream beyond the frame layout and buffer mapping that is conveyed in the GstVideoMultiviewMode.
No flags |
||
For stereo streams, the normal arrangement of left and right views is reversed. |
||
The left view is vertically mirrored. |
||
The left view is horizontally mirrored. |
||
The right view is vertically mirrored. |
||
The right view is horizontally mirrored. |
||
For frame-packed multiview modes, indicates that the individual views have been encoded with half the true width or height and should be scaled back up for display. This flag is used for overriding input layout interpretation by adjusting pixel-aspect-ratio. For side-by-side, column interleaved or checkerboard packings, the pixel width will be doubled. For row interleaved and top-bottom encodings, pixel height will be doubled. |
||
The video stream contains both mono and multiview portions, signalled on each buffer by the absence or presence of the |
struct GstVideoFrame {
  GstVideoInfo       info;
  GstVideoFrameFlags flags;
  GstBuffer         *buffer;
  gpointer           meta;
  gint               id;
  gpointer           data[GST_VIDEO_MAX_PLANES];
  GstMapInfo         map[GST_VIDEO_MAX_PLANES];
};
A video frame obtained from gst_video_frame_map()
GstVideoInfo |
the GstVideoInfo |
|
GstVideoFrameFlags |
GstVideoFrameFlags for the frame |
|
GstBuffer * |
the mapped buffer |
|
gpointer |
pointer to metadata if any |
|
gint |
id of the mapped frame. The id can, for example, be used to identify the frame in case of multiview video. |
|
gpointer |
pointers to the plane data |
|
GstMapInfo |
mappings of the planes |
Extra video frame flags
no flags |
||
The video frame is interlaced. In mixed interlace-mode, this flag specifies whether the frame is interlaced or progressive. |
||
The video frame has the top field first |
||
The video frame has the repeat flag |
||
The video frame has one field |
||
The video contains one or more non-mono views |
||
The video frame is the first in a set of corresponding views provided as sequential frames. |
Additional video buffer flags. These flags can potentially be used on any buffers carrying video data - even encoded data.
If the GstBuffer is interlaced. In mixed interlace-mode, this flag specifies whether the frame is interlaced or progressive. |
||
If the GstBuffer is interlaced, then the first field in the video frame is the top field. If unset, the bottom field is first. |
||
If the GstBuffer is interlaced, then the first field (as defined by the |
||
If the GstBuffer is interlaced, then only the first field (as defined by the |
||
The GstBuffer contains one or more specific views, such as left or right eye view. This flag is set on any buffer that contains non-mono content - even for streams that contain only a single viewpoint. In mixed mono / non-mono streams, the absence of the flag marks mono buffers. |
||
When conveying stereo/multiview content with frame-by-frame methods, this flag marks the first buffer in a bundle of frames that belong together. |
||
Offset to define more flags |
Enum values describing the most common tiling types.
Tiles are indexed. Use |