GstCudaAllocator
GObject ╰──GInitiallyUnowned ╰──GstObject ╰──GstAllocator ╰──GstCudaAllocator ╰──GstCudaPoolAllocator
A GstAllocator subclass for CUDA memory
Members
parent
(GstAllocator)
–
Since : 1.22
Class structure
GstCuda.CudaAllocator
GObject.Object ╰──GObject.InitiallyUnowned ╰──Gst.Object ╰──Gst.Allocator ╰──GstCuda.CudaAllocator ╰──GstCuda.CudaPoolAllocator
A Gst.Allocator subclass for CUDA memory
Members
parent
(Gst.Allocator)
–
Since : 1.22
Methods
gst_cuda_allocator_alloc
GstMemory * gst_cuda_allocator_alloc (GstCudaAllocator * allocator, GstCudaContext * context, GstCudaStream * stream, const GstVideoInfo * info)
Parameters:
allocator
(
[transfer: none][allow-none])
–
context
(
[transfer: none])
–
stream
(
[transfer: none][allow-none])
–
info
–
a newly allocated GstCudaMemory
Since : 1.22
GstCuda.CudaAllocator.prototype.alloc
function GstCuda.CudaAllocator.prototype.alloc(context: GstCuda.CudaContext, stream: GstCuda.CudaStream, info: GstVideo.VideoInfo): {
// javascript wrapper for 'gst_cuda_allocator_alloc'
}
Parameters:
a newly allocated GstCuda.CudaMemory
Since : 1.22
GstCuda.CudaAllocator.alloc
def GstCuda.CudaAllocator.alloc (self, context, stream, info):
#python wrapper for 'gst_cuda_allocator_alloc'
Parameters:
a newly allocated GstCuda.CudaMemory
Since : 1.22
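The C allocation path can be sketched as follows. This is a minimal example, assuming a valid GstCudaContext and GstCudaStream and linking against gstreamer-cuda; the helper name and the NV12/1920x1080 layout are illustrative only.

```c
#include <gst/gst.h>
#include <gst/cuda/gstcuda.h>

/* Illustrative helper: allocate NV12 device memory through the default
 * GstCudaAllocator (passing NULL as the allocator selects the default). */
static GstMemory *
allocate_nv12 (GstCudaContext * context, GstCudaStream * stream)
{
  GstVideoInfo info;

  /* The allocator derives size, stride and offsets from this layout */
  gst_video_info_set_format (&info, GST_VIDEO_FORMAT_NV12, 1920, 1080);

  return gst_cuda_allocator_alloc (NULL, context, stream, &info);
}
```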
gst_cuda_allocator_alloc_wrapped
GstMemory * gst_cuda_allocator_alloc_wrapped (GstCudaAllocator * allocator, GstCudaContext * context, GstCudaStream * stream, const GstVideoInfo * info, CUdeviceptr dev_ptr, gpointer user_data, GDestroyNotify notify)
Allocates a new memory object that wraps the given CUDA device memory.
info must represent the actual memory layout; that is, the offset, stride, and size fields of info must match the memory layout of dev_ptr.
If notify is NULL, the wrapped dev_ptr will be freed when the GstMemory is freed. Otherwise, if the caller sets notify, freeing dev_ptr is the caller's responsibility and the default GstCudaAllocator will not free it.
Parameters:
allocator
(
[transfer: none][allow-none])
–
context
(
[transfer: none])
–
stream
(
[transfer: none][allow-none])
–
info
–
dev_ptr
–
a CUdeviceptr pointing to CUDA device memory
user_data
(
[allow-none])
–
user data
notify
–
(allow-none) (scope async) (closure user_data): Called with user_data when the memory is freed
a new GstMemory
Since : 1.24
GstCuda.CudaAllocator.prototype.alloc_wrapped
function GstCuda.CudaAllocator.prototype.alloc_wrapped(context: GstCuda.CudaContext, stream: GstCuda.CudaStream, info: GstVideo.VideoInfo, dev_ptr: CudaGst.deviceptr, user_data: Object, notify: GLib.DestroyNotify): {
// javascript wrapper for 'gst_cuda_allocator_alloc_wrapped'
}
Allocates a new memory object that wraps the given CUDA device memory.
info must represent the actual memory layout; that is, the offset, stride, and size fields of info must match the memory layout of dev_ptr.
If notify is null, the wrapped dev_ptr will be freed when the Gst.Memory is freed. Otherwise, if the caller sets notify, freeing dev_ptr is the caller's responsibility and the default GstCuda.CudaAllocator will not free it.
Parameters:
dev_ptr
(CudaGst.deviceptr)
–
a CUdeviceptr pointing to CUDA device memory
user data
(allow-none) (scope async) (closure user_data): Called with user_data when the memory is freed
a new Gst.Memory
Since : 1.24
GstCuda.CudaAllocator.alloc_wrapped
def GstCuda.CudaAllocator.alloc_wrapped (self, context, stream, info, dev_ptr, *user_data, notify):
#python wrapper for 'gst_cuda_allocator_alloc_wrapped'
Allocates a new memory object that wraps the given CUDA device memory.
info must represent the actual memory layout; that is, the offset, stride, and size fields of info must match the memory layout of dev_ptr.
If notify is None, the wrapped dev_ptr will be freed when the Gst.Memory is freed. Otherwise, if the caller sets notify, freeing dev_ptr is the caller's responsibility and the default GstCuda.CudaAllocator will not free it.
Parameters:
dev_ptr
(CudaGst.deviceptr)
–
a CUdeviceptr pointing to CUDA device memory
user data
(allow-none) (scope async) (closure user_data): Called with user_data when the memory is freed
a new Gst.Memory
Since : 1.24
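The ownership rule around notify can be sketched like this; release_external_ptr, wrap_device_memory, and app_state are hypothetical names, and the example assumes the memory layout in info matches dev_ptr exactly.

```c
#include <gst/cuda/gstcuda.h>

/* Hypothetical cleanup for externally owned device memory; because a
 * notify is supplied, the default allocator will NOT free dev_ptr. */
static void
release_external_ptr (gpointer user_data)
{
  /* e.g., hand the device pointer back to the application's own pool */
}

static GstMemory *
wrap_device_memory (GstCudaContext * context, GstCudaStream * stream,
    const GstVideoInfo * info, CUdeviceptr dev_ptr, gpointer app_state)
{
  /* info must describe the exact layout (offset/stride/size) of dev_ptr */
  return gst_cuda_allocator_alloc_wrapped (NULL, context, stream, info,
      dev_ptr, app_state, release_external_ptr);
}
```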
gst_cuda_allocator_set_active
gboolean gst_cuda_allocator_set_active (GstCudaAllocator * allocator, gboolean active)
Controls the active state of allocator. The default GstCudaAllocator is stateless, so the active state is ignored, but subclass implementations (e.g., GstCudaPoolAllocator) may require explicit active-state control for their internal resource management.
This method is conceptually identical to gst_buffer_pool_set_active.
TRUE if active state of allocator was successfully updated.
Since : 1.24
GstCuda.CudaAllocator.prototype.set_active
function GstCuda.CudaAllocator.prototype.set_active(active: Number): {
// javascript wrapper for 'gst_cuda_allocator_set_active'
}
Controls the active state of allocator. The default GstCuda.CudaAllocator is stateless, so the active state is ignored, but subclass implementations (e.g., GstCuda.CudaPoolAllocator) may require explicit active-state control for their internal resource management.
This method is conceptually identical to gst_buffer_pool_set_active.
Parameters:
the new active state
Since : 1.24
GstCuda.CudaAllocator.set_active
def GstCuda.CudaAllocator.set_active (self, active):
#python wrapper for 'gst_cuda_allocator_set_active'
Controls the active state of allocator. The default GstCuda.CudaAllocator is stateless, so the active state is ignored, but subclass implementations (e.g., GstCuda.CudaPoolAllocator) may require explicit active-state control for their internal resource management.
This method is conceptually identical to gst_buffer_pool_set_active.
Parameters:
the new active state
Since : 1.24
gst_cuda_allocator_virtual_alloc
GstMemory * gst_cuda_allocator_virtual_alloc (GstCudaAllocator * allocator, GstCudaContext * context, GstCudaStream * stream, const GstVideoInfo * info, const CUmemAllocationProp* prop, CUmemAllocationGranularity_flags granularity_flags)
Allocates new GstMemory object with CUDA virtual memory.
Parameters:
allocator
–
context
–
stream
–
info
–
prop
–
allocation property
granularity_flags
–
allocation flags
a newly allocated memory object or NULL if allocation is not supported
Since : 1.24
GstCuda.CudaAllocator.prototype.virtual_alloc
function GstCuda.CudaAllocator.prototype.virtual_alloc(context: GstCuda.CudaContext, stream: GstCuda.CudaStream, info: GstVideo.VideoInfo, prop: CudaGst.memAllocationProp, granularity_flags: CudaGst.memAllocationGranularity_flags): {
// javascript wrapper for 'gst_cuda_allocator_virtual_alloc'
}
Allocates new Gst.Memory object with CUDA virtual memory.
Parameters:
prop
(CudaGst.memAllocationProp)
–
allocation property
granularity_flags
(CudaGst.memAllocationGranularity_flags)
–
allocation flags
a newly allocated memory object or null if allocation is not supported
Since : 1.24
GstCuda.CudaAllocator.virtual_alloc
def GstCuda.CudaAllocator.virtual_alloc (self, context, stream, info, prop, granularity_flags):
#python wrapper for 'gst_cuda_allocator_virtual_alloc'
Allocates new Gst.Memory object with CUDA virtual memory.
Parameters:
prop
(CudaGst.memAllocationProp)
–
allocation property
granularity_flags
(CudaGst.memAllocationGranularity_flags)
–
allocation flags
a newly allocated memory object or None if allocation is not supported
Since : 1.24
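A sketch of the virtual-memory path, assuming a CUDA toolkit that supports the cuMemCreate API; the helper name and the device_id parameter are illustrative, and the CUmemAllocationProp fields shown are the typical minimum (platform-specific fields such as requestedHandleTypes are omitted).

```c
#include <gst/cuda/gstcuda.h>

/* Sketch: request virtual (cuMemCreate-backed) memory for the given
 * layout. Returns NULL when virtual memory allocation is unsupported. */
static GstMemory *
virtual_alloc_sketch (GstCudaContext * context, GstCudaStream * stream,
    const GstVideoInfo * info, int device_id)
{
  CUmemAllocationProp prop = { 0, };

  prop.type = CU_MEM_ALLOCATION_TYPE_PINNED;
  prop.location.type = CU_MEM_LOCATION_TYPE_DEVICE;
  prop.location.id = device_id;

  return gst_cuda_allocator_virtual_alloc (NULL, context, stream, info,
      &prop, CU_MEM_ALLOC_GRANULARITY_MINIMUM);
}
```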
Virtual Methods
set_active
gboolean set_active (GstCudaAllocator * allocator, gboolean active)
Since : 1.24
vfunc_set_active
function vfunc_set_active(allocator: GstCuda.CudaAllocator, active: Number): {
// javascript implementation of the 'set_active' virtual method
}
Parameters:
the new active state
Since : 1.24
do_set_active
def do_set_active (allocator, active):
#python implementation of the 'set_active' virtual method
Parameters:
the new active state
Since : 1.24
GstCudaMemory
Members
mem
(GstMemory)
–
context
(GstCudaContext *)
–
info
(GstVideoInfo)
–
Since : 1.22
GstCuda.CudaMemory
Members
mem
(Gst.Memory)
–
context
(GstCuda.CudaContext)
–
info
(GstVideo.VideoInfo)
–
Since : 1.22
Methods
gst_cuda_memory_export
gboolean gst_cuda_memory_export (GstCudaMemory * mem, gpointer os_handle)
Exports the virtual memory handle as an OS-specific handle.
On Windows, os_handle should be a pointer to a HANDLE (i.e., void **); on Linux, a pointer to a file descriptor (i.e., int *).
The returned os_handle is owned by mem, so the caller must not close the handle.
TRUE if successful
Since : 1.24
GstCuda.CudaMemory.prototype.export
function GstCuda.CudaMemory.prototype.export(): {
// javascript wrapper for 'gst_cuda_memory_export'
}
Exports the virtual memory handle as an OS-specific handle.
On Windows, os_handle should be a pointer to a HANDLE (i.e., void **); on Linux, a pointer to a file descriptor (i.e., int *).
The returned os_handle is owned by mem, so the caller must not close the handle.
Parameters:
Since : 1.24
GstCuda.CudaMemory.export
def GstCuda.CudaMemory.export (self):
#python wrapper for 'gst_cuda_memory_export'
Exports the virtual memory handle as an OS-specific handle.
On Windows, os_handle should be a pointer to a HANDLE (i.e., void **); on Linux, a pointer to a file descriptor (i.e., int *).
The returned os_handle is owned by mem, so the caller must not close the handle.
Parameters:
Since : 1.24
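The per-OS handle convention can be sketched as follows; this assumes mem was allocated with the virtual (mmap) allocation method, and the helper name is illustrative. Note the handle remains owned by the memory.

```c
#include <gst/cuda/gstcuda.h>

/* Export the OS handle backing a virtual-memory GstCudaMemory.
 * The handle stays owned by mem: do not close() / CloseHandle() it. */
static gboolean
print_os_handle (GstCudaMemory * mem)
{
#ifdef G_OS_WIN32
  HANDLE handle = NULL;

  if (!gst_cuda_memory_export (mem, (gpointer) & handle))
    return FALSE;
  g_print ("exported HANDLE %p\n", handle);
#else
  int fd = -1;

  if (!gst_cuda_memory_export (mem, (gpointer) & fd))
    return FALSE;
  g_print ("exported fd %d\n", fd);
#endif
  return TRUE;
}
```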
gst_cuda_memory_get_alloc_method
GstCudaMemoryAllocMethod gst_cuda_memory_get_alloc_method (GstCudaMemory * mem)
Query allocation method
Parameters:
mem
–
Since : 1.24
GstCuda.CudaMemory.prototype.get_alloc_method
function GstCuda.CudaMemory.prototype.get_alloc_method(): {
// javascript wrapper for 'gst_cuda_memory_get_alloc_method'
}
Query allocation method
Parameters:
Since : 1.24
GstCuda.CudaMemory.get_alloc_method
def GstCuda.CudaMemory.get_alloc_method (self):
#python wrapper for 'gst_cuda_memory_get_alloc_method'
Query allocation method
Parameters:
Since : 1.24
gst_cuda_memory_get_stream
GstCudaStream * gst_cuda_memory_get_stream (GstCudaMemory * mem)
Gets CUDA stream object associated with mem
Parameters:
mem
–
a GstCudaStream or NULL if default CUDA stream is in use
Since : 1.24
GstCuda.CudaMemory.prototype.get_stream
function GstCuda.CudaMemory.prototype.get_stream(): {
// javascript wrapper for 'gst_cuda_memory_get_stream'
}
Gets CUDA stream object associated with mem
Parameters:
a GstCuda.CudaStream or null if default CUDA stream is in use
Since : 1.24
GstCuda.CudaMemory.get_stream
def GstCuda.CudaMemory.get_stream (self):
#python wrapper for 'gst_cuda_memory_get_stream'
Gets CUDA stream object associated with mem
Parameters:
a GstCuda.CudaStream or None if default CUDA stream is in use
Since : 1.24
gst_cuda_memory_get_texture
gboolean gst_cuda_memory_get_texture (GstCudaMemory * mem, guint plane, CUfilter_mode filter_mode, CUtexObject* texture)
Creates CUtexObject with given parameters
Parameters:
mem
–
plane
–
the plane index
filter_mode
–
filter mode
texture
(
[out][transfer: none])
–
a pointer to CUtexObject object
TRUE if successful
Since : 1.24
GstCuda.CudaMemory.prototype.get_texture
function GstCuda.CudaMemory.prototype.get_texture(plane: Number, filter_mode: CudaGst.filter_mode): {
// javascript wrapper for 'gst_cuda_memory_get_texture'
}
Creates CUtexObject with given parameters
Parameters:
the plane index
filter_mode
(CudaGst.filter_mode)
–
filter mode
Returns a tuple made of:
texture
(CudaGst.texObject
)
–
true if successful
Since : 1.24
GstCuda.CudaMemory.get_texture
def GstCuda.CudaMemory.get_texture (self, plane, filter_mode):
#python wrapper for 'gst_cuda_memory_get_texture'
Creates CUtexObject with given parameters
Parameters:
the plane index
filter_mode
(CudaGst.filter_mode)
–
filter mode
Returns a tuple made of:
texture
(CudaGst.texObject
)
–
True if successful
Since : 1.24
gst_cuda_memory_get_token_data
gpointer gst_cuda_memory_get_token_data (GstCudaMemory * mem, gint64 token)
Gets the user data pointer stored via gst_cuda_memory_set_token_data
user data pointer or NULL
Since : 1.24
GstCuda.CudaMemory.prototype.get_token_data
function GstCuda.CudaMemory.prototype.get_token_data(token: Number): {
// javascript wrapper for 'gst_cuda_memory_get_token_data'
}
Gets the user data pointer stored via GstCuda.CudaMemory.prototype.set_token_data
Since : 1.24
GstCuda.CudaMemory.get_token_data
def GstCuda.CudaMemory.get_token_data (self, token):
#python wrapper for 'gst_cuda_memory_get_token_data'
Gets the user data pointer stored via GstCuda.CudaMemory.set_token_data
Since : 1.24
gst_cuda_memory_get_user_data
gpointer gst_cuda_memory_get_user_data (GstCudaMemory * mem)
Gets the user data pointer stored via gst_cuda_allocator_alloc_wrapped
Parameters:
mem
–
the user data pointer
Since : 1.24
GstCuda.CudaMemory.prototype.get_user_data
function GstCuda.CudaMemory.prototype.get_user_data(): {
// javascript wrapper for 'gst_cuda_memory_get_user_data'
}
Gets the user data pointer stored via GstCuda.CudaAllocator.prototype.alloc_wrapped
Parameters:
the user data pointer
Since : 1.24
GstCuda.CudaMemory.get_user_data
def GstCuda.CudaMemory.get_user_data (self):
#python wrapper for 'gst_cuda_memory_get_user_data'
Gets the user data pointer stored via GstCuda.CudaAllocator.alloc_wrapped
Parameters:
the user data pointer
Since : 1.24
gst_cuda_memory_set_token_data
gst_cuda_memory_set_token_data (GstCudaMemory * mem, gint64 token, gpointer data, GDestroyNotify notify)
Sets opaque user data on a GstCudaMemory
Parameters:
mem
–
token
–
a user token
data
–
a user data pointer
notify
–
function to invoke with data as argument, when data needs to be freed
Since : 1.24
GstCuda.CudaMemory.prototype.set_token_data
function GstCuda.CudaMemory.prototype.set_token_data(token: Number, data: Object, notify: GLib.DestroyNotify): {
// javascript wrapper for 'gst_cuda_memory_set_token_data'
}
Sets opaque user data on a GstCuda.CudaMemory
Parameters:
a user token
a user data pointer
function to invoke with data as argument, when data needs to be freed
Since : 1.24
GstCuda.CudaMemory.set_token_data
def GstCuda.CudaMemory.set_token_data (self, token, data, notify):
#python wrapper for 'gst_cuda_memory_set_token_data'
Sets opaque user data on a GstCuda.CudaMemory
Parameters:
a user token
a user data pointer
function to invoke with data as argument, when data needs to be freed
Since : 1.24
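The token pair can be used like this; MY_APP_TOKEN is an arbitrary application-chosen value (tokens simply namespace the data attached to the memory), and the helper name is illustrative.

```c
#include <gst/cuda/gstcuda.h>

#define MY_APP_TOKEN 0x12345678

static void
attach_and_read_back (GstCudaMemory * mem)
{
  gchar *data = g_strdup ("per-memory state");

  /* g_free() is invoked with data when it needs to be freed */
  gst_cuda_memory_set_token_data (mem, MY_APP_TOKEN, data, g_free);

  /* Retrieve the same pointer later using the same token */
  data = gst_cuda_memory_get_token_data (mem, MY_APP_TOKEN);
  g_print ("stored: %s\n", data);
}
```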
gst_cuda_memory_sync
gst_cuda_memory_sync (GstCudaMemory * mem)
Performs synchronization if needed
Parameters:
mem
–
Since : 1.24
GstCuda.CudaMemory.prototype.sync
function GstCuda.CudaMemory.prototype.sync(): {
// javascript wrapper for 'gst_cuda_memory_sync'
}
Performs synchronization if needed
Parameters:
Since : 1.24
GstCuda.CudaMemory.sync
def GstCuda.CudaMemory.sync (self):
#python wrapper for 'gst_cuda_memory_sync'
Performs synchronization if needed
Parameters:
Since : 1.24
Functions
gst_cuda_memory_init_once
gst_cuda_memory_init_once ()
Ensures that the GstCudaAllocator is initialized and ready to be used.
Since : 1.22
GstCuda.CudaMemory.prototype.init_once
function GstCuda.CudaMemory.prototype.init_once(): {
// javascript wrapper for 'gst_cuda_memory_init_once'
}
Ensures that the GstCuda.CudaAllocator is initialized and ready to be used.
Since : 1.22
GstCuda.CudaMemory.init_once
def GstCuda.CudaMemory.init_once ():
#python wrapper for 'gst_cuda_memory_init_once'
Ensures that the GstCuda.CudaAllocator is initialized and ready to be used.
Since : 1.22
GstCudaPoolAllocator
GObject ╰──GInitiallyUnowned ╰──GstObject ╰──GstAllocator ╰──GstCudaAllocator ╰──GstCudaPoolAllocator
A GstCudaAllocator subclass for pooled CUDA memory
Members
parent
(GstCudaAllocator)
–
context
(GstCudaContext *)
–
stream
(GstCudaStream *)
–
info
(GstVideoInfo)
–
Since : 1.24
Class structure
GstCuda.CudaPoolAllocatorClass
Attributes
parent_class
(GstCuda.CudaAllocatorClass)
–
GstCuda.CudaPoolAllocator
GObject.Object ╰──GObject.InitiallyUnowned ╰──Gst.Object ╰──Gst.Allocator ╰──GstCuda.CudaAllocator ╰──GstCuda.CudaPoolAllocator
A GstCuda.CudaAllocator subclass for pooled CUDA memory
Members
parent
(GstCuda.CudaAllocator)
–
context
(GstCuda.CudaContext)
–
stream
(GstCuda.CudaStream)
–
info
(GstVideo.VideoInfo)
–
Since : 1.24
Constructors
gst_cuda_pool_allocator_new
GstCudaPoolAllocator * gst_cuda_pool_allocator_new (GstCudaContext * context, GstCudaStream * stream, const GstVideoInfo * info)
Creates a new GstCudaPoolAllocator instance.
Parameters:
context
–
stream
(
[allow-none])
–
info
–
a new GstCudaPoolAllocator instance
Since : 1.24
GstCuda.CudaPoolAllocator.prototype.new
function GstCuda.CudaPoolAllocator.prototype.new(context: GstCuda.CudaContext, stream: GstCuda.CudaStream, info: GstVideo.VideoInfo): {
// javascript wrapper for 'gst_cuda_pool_allocator_new'
}
Creates a new GstCuda.CudaPoolAllocator instance.
Parameters:
a new GstCuda.CudaPoolAllocator instance
Since : 1.24
GstCuda.CudaPoolAllocator.new
def GstCuda.CudaPoolAllocator.new (context, stream, info):
#python wrapper for 'gst_cuda_pool_allocator_new'
Creates a new GstCuda.CudaPoolAllocator instance.
Parameters:
a new GstCuda.CudaPoolAllocator instance
Since : 1.24
gst_cuda_pool_allocator_new_for_virtual_memory
GstCudaPoolAllocator * gst_cuda_pool_allocator_new_for_virtual_memory (GstCudaContext * context, GstCudaStream * stream, const GstVideoInfo * info, const CUmemAllocationProp* prop, CUmemAllocationGranularity_flags granularity_flags)
Creates a new GstCudaPoolAllocator instance for virtual memory allocation.
Parameters:
context
–
stream
(
[allow-none])
–
info
–
prop
–
granularity_flags
–
a new GstCudaPoolAllocator instance
Since : 1.24
GstCuda.CudaPoolAllocator.prototype.new_for_virtual_memory
function GstCuda.CudaPoolAllocator.prototype.new_for_virtual_memory(context: GstCuda.CudaContext, stream: GstCuda.CudaStream, info: GstVideo.VideoInfo, prop: CudaGst.memAllocationProp, granularity_flags: CudaGst.memAllocationGranularity_flags): {
// javascript wrapper for 'gst_cuda_pool_allocator_new_for_virtual_memory'
}
Creates a new GstCuda.CudaPoolAllocator instance for virtual memory allocation.
Parameters:
prop
(CudaGst.memAllocationProp)
–
granularity_flags
(CudaGst.memAllocationGranularity_flags)
–
a new GstCuda.CudaPoolAllocator instance
Since : 1.24
GstCuda.CudaPoolAllocator.new_for_virtual_memory
def GstCuda.CudaPoolAllocator.new_for_virtual_memory (context, stream, info, prop, granularity_flags):
#python wrapper for 'gst_cuda_pool_allocator_new_for_virtual_memory'
Creates a new GstCuda.CudaPoolAllocator instance for virtual memory allocation.
Parameters:
prop
(CudaGst.memAllocationProp)
–
granularity_flags
(CudaGst.memAllocationGranularity_flags)
–
a new GstCuda.CudaPoolAllocator instance
Since : 1.24
gst_cuda_pool_allocator_new_full
GstCudaPoolAllocator * gst_cuda_pool_allocator_new_full (GstCudaContext * context, GstCudaStream * stream, const GstVideoInfo * info, GstStructure * config)
Creates a new GstCudaPoolAllocator instance with given config
Parameters:
context
–
stream
(
[allow-none])
–
info
–
config
(
[transfer: full][allow-none])
–
a GstStructure with configuration options
a new GstCudaPoolAllocator instance
Since : 1.26
GstCuda.CudaPoolAllocator.prototype.new_full
function GstCuda.CudaPoolAllocator.prototype.new_full(context: GstCuda.CudaContext, stream: GstCuda.CudaStream, info: GstVideo.VideoInfo, config: Gst.Structure): {
// javascript wrapper for 'gst_cuda_pool_allocator_new_full'
}
Creates a new GstCuda.CudaPoolAllocator instance with given config
a new GstCuda.CudaPoolAllocator instance
Since : 1.26
GstCuda.CudaPoolAllocator.new_full
def GstCuda.CudaPoolAllocator.new_full (context, stream, info, config):
#python wrapper for 'gst_cuda_pool_allocator_new_full'
Creates a new GstCuda.CudaPoolAllocator instance with given config
a new GstCuda.CudaPoolAllocator instance
Since : 1.26
Methods
gst_cuda_pool_allocator_acquire_memory
GstFlowReturn gst_cuda_pool_allocator_acquire_memory (GstCudaPoolAllocator * allocator, GstMemory ** memory)
Acquires a GstMemory from allocator. memory should point to a memory location that can hold a pointer to the new GstMemory.
a GstFlowReturn such as GST_FLOW_FLUSHING when the allocator is inactive.
Since : 1.24
GstCuda.CudaPoolAllocator.prototype.acquire_memory
function GstCuda.CudaPoolAllocator.prototype.acquire_memory(): {
// javascript wrapper for 'gst_cuda_pool_allocator_acquire_memory'
}
Acquires a Gst.Memory from allocator. memory should point to a memory location that can hold a pointer to the new Gst.Memory.
Parameters:
Returns a tuple made of:
a Gst.FlowReturn such as Gst.FlowReturn.FLUSHING when the allocator is inactive.
the acquired Gst.Memory
Since : 1.24
GstCuda.CudaPoolAllocator.acquire_memory
def GstCuda.CudaPoolAllocator.acquire_memory (self):
#python wrapper for 'gst_cuda_pool_allocator_acquire_memory'
Acquires a Gst.Memory from allocator. memory should point to a memory location that can hold a pointer to the new Gst.Memory.
Parameters:
Returns a tuple made of:
a Gst.FlowReturn such as Gst.FlowReturn.FLUSHING when the allocator is inactive.
the acquired Gst.Memory
Since : 1.24
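The pool allocator lifecycle (create, activate, acquire, release, deactivate) can be sketched as follows; the helper name is illustrative and a valid context, stream, and video info are assumed.

```c
#include <gst/cuda/gstcuda.h>

static void
pool_lifecycle_sketch (GstCudaContext * context, GstCudaStream * stream,
    const GstVideoInfo * info)
{
  GstCudaPoolAllocator *pool;
  GstMemory *mem = NULL;
  GstFlowReturn ret;

  pool = gst_cuda_pool_allocator_new (context, stream, info);

  /* Pool allocators are stateful: activation is required before acquiring */
  gst_cuda_allocator_set_active (GST_CUDA_ALLOCATOR_CAST (pool), TRUE);

  ret = gst_cuda_pool_allocator_acquire_memory (pool, &mem);
  if (ret == GST_FLOW_OK) {
    /* ... use mem ... */
    gst_memory_unref (mem);     /* returns the memory to the pool */
  }

  gst_cuda_allocator_set_active (GST_CUDA_ALLOCATOR_CAST (pool), FALSE);
  gst_object_unref (pool);
}
```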
Functions
gst_cuda_register_allocator_need_pool_callback
gst_cuda_register_allocator_need_pool_callback (GstCudaMemoryAllocatorNeedPoolCallback callback, gpointer user_data, GDestroyNotify notify)
Sets the global need-pool callback function
Parameters:
callback
–
the callback
user_data
–
a user_data argument for the callback
notify
–
a destroy notify function
Since : 1.26
gst_is_cuda_memory
gboolean gst_is_cuda_memory (GstMemory * mem)
Checks if mem is a CUDA memory
Parameters:
mem
–
Since : 1.22
GstCuda.prototype.is_cuda_memory
function GstCuda.prototype.is_cuda_memory(mem: Gst.Memory): {
// javascript wrapper for 'gst_is_cuda_memory'
}
Checks if mem is a CUDA memory
Parameters:
Since : 1.22
GstCuda.is_cuda_memory
def GstCuda.is_cuda_memory (mem):
#python wrapper for 'gst_is_cuda_memory'
Checks if mem is a CUDA memory
Parameters:
Since : 1.22
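A typical use is checking whether a buffer's memory is CUDA-backed before using CUDA-specific API on it; the helper name is illustrative and the buffer is assumed to hold at least one GstMemory.

```c
#include <gst/gst.h>
#include <gst/cuda/gstcuda.h>

static void
inspect_buffer (GstBuffer * buffer)
{
  GstMemory *mem = gst_buffer_peek_memory (buffer, 0);

  if (gst_is_cuda_memory (mem)) {
    GstCudaMemory *cmem = GST_CUDA_MEMORY_CAST (mem);

    /* Safe to use CUDA-specific API: cmem->context,
     * gst_cuda_memory_get_stream (cmem), etc. */
    g_print ("CUDA memory on context %p\n", cmem->context);
  } else {
    g_print ("system memory, needs upload\n");
  }
}
```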
Function Macros
GST_CUDA_ALLOCATOR_CAST
#define GST_CUDA_ALLOCATOR_CAST(obj) ((GstCudaAllocator *)(obj))
Since : 1.22
GST_CUDA_MEMORY_CAST
#define GST_CUDA_MEMORY_CAST(mem) ((GstCudaMemory *) (mem))
Since : 1.22
Enumerations
GstCudaMemoryAllocMethod
CUDA memory allocation method
Members
GST_CUDA_MEMORY_ALLOC_UNKNOWN
(0)
–
GST_CUDA_MEMORY_ALLOC_MALLOC
(1)
–
Memory allocated via cuMemAlloc or cuMemAllocPitch
(Since: 1.24)
GST_CUDA_MEMORY_ALLOC_MMAP
(2)
–
Memory allocated via cuMemCreate and cuMemMap
(Since: 1.24)
Since : 1.24
GstCuda.CudaMemoryAllocMethod
CUDA memory allocation method
Members
GstCuda.CudaMemoryAllocMethod.UNKNOWN
(0)
–
GstCuda.CudaMemoryAllocMethod.MALLOC
(1)
–
Memory allocated via cuMemAlloc or cuMemAllocPitch
(Since: 1.24)
GstCuda.CudaMemoryAllocMethod.MMAP
(2)
–
Memory allocated via cuMemCreate and cuMemMap
(Since: 1.24)
Since : 1.24
GstCudaMemoryTransfer
CUDA memory transfer flags
Members
GST_CUDA_MEMORY_TRANSFER_NEED_DOWNLOAD
(1048576)
–
the device memory needs downloading to the staging memory
(Since: 1.22)
GST_CUDA_MEMORY_TRANSFER_NEED_UPLOAD
(2097152)
–
the staging memory needs uploading to the device memory
(Since: 1.22)
GST_CUDA_MEMORY_TRANSFER_NEED_SYNC
(4194304)
–
the device memory needs synchronization
(Since: 1.24)
GstCuda.CudaMemoryTransfer
CUDA memory transfer flags
Members
GstCuda.CudaMemoryTransfer.DOWNLOAD
(1048576)
–
the device memory needs downloading to the staging memory
(Since: 1.22)
GstCuda.CudaMemoryTransfer.UPLOAD
(2097152)
–
the staging memory needs uploading to the device memory
(Since: 1.22)
GstCuda.CudaMemoryTransfer.SYNC
(4194304)
–
the device memory needs synchronization
(Since: 1.24)
Constants
GST_CAPS_FEATURE_MEMORY_CUDA_MEMORY
#define GST_CAPS_FEATURE_MEMORY_CUDA_MEMORY "memory:CUDAMemory"
Name of the caps feature for indicating the use of GstCudaMemory
Since : 1.22
GstCuda.CAPS_FEATURE_MEMORY_CUDA_MEMORY
Name of the caps feature for indicating the use of GstCuda.CudaMemory
Since : 1.22
GST_CUDA_ALLOCATOR_OPT_STREAM_ORDERED
#define GST_CUDA_ALLOCATOR_OPT_STREAM_ORDERED "GstCudaAllocator.stream-ordered"
G_TYPE_BOOLEAN Allows stream ordered allocation. Default is FALSE
Since : 1.26
GstCuda.CUDA_ALLOCATOR_OPT_STREAM_ORDERED
G_TYPE_BOOLEAN (not introspectable) Allows stream ordered allocation. Default is false
Since : 1.26
GstCuda.CUDA_ALLOCATOR_OPT_STREAM_ORDERED
G_TYPE_BOOLEAN (not introspectable) Allows stream ordered allocation. Default is False
Since : 1.26
GST_CUDA_MEMORY_TYPE_NAME
#define GST_CUDA_MEMORY_TYPE_NAME "gst.cuda.memory"
Name of cuda memory type
Since : 1.22
GstCuda.CUDA_MEMORY_TYPE_NAME
Name of cuda memory type
Since : 1.22
GST_MAP_CUDA
#define GST_MAP_CUDA (GST_MAP_FLAG_LAST << 1)
Flag indicating that we should map the CUDA device memory instead of system memory.
Combining GST_MAP_CUDA with GST_MAP_WRITE has the same semantics as writing to CUDA device/host memory. Conversely, combining GST_MAP_CUDA with GST_MAP_READ has the same semantics as reading from CUDA device/host memory.
Since : 1.22
GstCuda.MAP_CUDA
Flag indicating that we should map the CUDA device memory instead of system memory.
Combining GstCuda.MAP_CUDA with Gst.MapFlags.WRITE has the same semantics as writing to CUDA device/host memory. Conversely, combining GstCuda.MAP_CUDA with Gst.MapFlags.READ has the same semantics as reading from CUDA device/host memory.
Since : 1.22
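Mapping with GST_MAP_CUDA can be sketched as follows; the helper name is illustrative, and the example assumes the conventional behavior that with this flag the mapped data field carries a CUDA device pointer rather than a host pointer.

```c
#include <gst/gst.h>
#include <gst/cuda/gstcuda.h>

static void
launch_on_device (GstMemory * mem)
{
  GstMapInfo map;

  /* Map for device-side reading: map.data is a device pointer */
  if (gst_memory_map (mem, &map, GST_MAP_READ | GST_MAP_CUDA)) {
    CUdeviceptr ptr = (CUdeviceptr) map.data;

    /* ... enqueue CUDA kernels or cuMemcpy calls against ptr ... */

    gst_memory_unmap (mem, &map);
  }
}
```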
Callbacks
GstCudaMemoryAllocatorNeedPoolCallback
GstCudaMemoryPool * (*GstCudaMemoryAllocatorNeedPoolCallback) (GstCudaAllocator * allocator, GstCudaContext * context, gpointer user_data)
Called to request a CUDA memory pool object. If the callee returns a memory pool, the allocator will allocate memory via cuMemAllocFromPoolAsync. Otherwise, the device's default memory pool will be used with cuMemAllocAsync.
Configured GstCudaMemoryPool object
Since : 1.26