GstCudaAllocator

GObject
    ╰──GInitiallyUnowned
        ╰──GstObject
            ╰──GstAllocator
                ╰──GstCudaAllocator
                    ╰──GstCudaPoolAllocator

A GstAllocator subclass for CUDA memory

Members

parent (GstAllocator) –
No description available

Since : 1.22


Class structure

GstCudaAllocatorClass

Fields
parent_class (GstAllocatorClass) –
No description available

GstCuda.CudaAllocatorClass

Attributes
parent_class (Gst.AllocatorClass) –
No description available


GstCuda.CudaAllocator

GObject.Object
    ╰──GObject.InitiallyUnowned
        ╰──Gst.Object
            ╰──Gst.Allocator
                ╰──GstCuda.CudaAllocator
                    ╰──GstCuda.CudaPoolAllocator

A Gst.Allocator subclass for CUDA memory

Members

parent (Gst.Allocator) –
No description available

Since : 1.22



Methods

gst_cuda_allocator_alloc

GstMemory *
gst_cuda_allocator_alloc (GstCudaAllocator * allocator,
                          GstCudaContext * context,
                          GstCudaStream * stream,
                          const GstVideoInfo * info)

Parameters:

allocator ( [transfer: none][allow-none])

a GstCudaAllocator

context ( [transfer: none])

a GstCudaContext

stream ( [transfer: none][allow-none])

a GstCudaStream

info

a GstVideoInfo

Returns ( [transfer: full][nullable])

a newly allocated GstCudaMemory

Since : 1.22
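
As a hedged illustration of the call above, one might allocate device memory for a single frame as follows; the NV12 format, the resolution, and the assumption that a GstCudaContext already exists are this sketch's own, not part of this page:

```c
/* Sketch: allocate CUDA device memory for one NV12 frame.
 * Passing NULL as the allocator selects the default GstCudaAllocator;
 * passing NULL as the stream uses the default CUDA stream. */
#include <gst/cuda/gstcuda.h>

static GstMemory *
alloc_nv12_frame (GstCudaContext * context)
{
  GstVideoInfo info;

  gst_video_info_set_format (&info, GST_VIDEO_FORMAT_NV12, 1920, 1080);
  return gst_cuda_allocator_alloc (NULL, context, NULL, &info);
}
```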


GstCuda.CudaAllocator.prototype.alloc

function GstCuda.CudaAllocator.prototype.alloc(context: GstCuda.CudaContext, stream: GstCuda.CudaStream, info: GstVideo.VideoInfo): {
    // javascript wrapper for 'gst_cuda_allocator_alloc'
}
Returns (Gst.Memory)

a newly allocated GstCuda.CudaMemory

Since : 1.22


GstCuda.CudaAllocator.alloc

def GstCuda.CudaAllocator.alloc (self, context, stream, info):
    #python wrapper for 'gst_cuda_allocator_alloc'
Returns (Gst.Memory)

a newly allocated GstCuda.CudaMemory

Since : 1.22


gst_cuda_allocator_alloc_wrapped

GstMemory *
gst_cuda_allocator_alloc_wrapped (GstCudaAllocator * allocator,
                                  GstCudaContext * context,
                                  GstCudaStream * stream,
                                  const GstVideoInfo * info,
                                  CUdeviceptr dev_ptr,
                                  gpointer user_data,
                                  GDestroyNotify notify)

Allocates a new memory object that wraps the given CUDA device memory.

info must describe the actual memory layout; that is, the offset, stride, and size fields of info must match the memory layout of dev_ptr.

By default (when notify is NULL), the wrapped dev_ptr will be freed when the GstMemory is freed. Otherwise, if the caller sets notify, freeing dev_ptr is the caller's responsibility and the default GstCudaAllocator will not free it.

Parameters:

allocator ( [transfer: none][allow-none])

a GstCudaAllocator

context ( [transfer: none])

a GstCudaContext

stream ( [transfer: none][allow-none])

a GstCudaStream

info

a GstVideoInfo

dev_ptr

a CUdeviceptr CUDA device memory

user_data ( [allow-none])

user data

notify ( [allow-none])

Called with user_data when the memory is freed

Returns ( [transfer: full])

a new GstMemory

Since : 1.24
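
A hedged C sketch of the ownership rule described above; obtaining dev_ptr via cuMemAlloc and the helper names are assumptions of this example:

```c
/* Sketch: wrap an externally allocated device pointer so it can be
 * carried through a pipeline as GstMemory. Because a notify callback
 * is supplied, the wrapped pointer stays owned by the caller and is
 * released from the callback, not by the allocator. */
#include <gst/cuda/gstcuda.h>

static void
on_wrapped_freed (gpointer user_data)
{
  CUdeviceptr dev_ptr = (CUdeviceptr) (guintptr) user_data;

  cuMemFree (dev_ptr);          /* caller-side cleanup */
}

static GstMemory *
wrap_device_memory (GstCudaContext * context, const GstVideoInfo * info,
    CUdeviceptr dev_ptr)
{
  return gst_cuda_allocator_alloc_wrapped (NULL, context, NULL, info,
      dev_ptr, (gpointer) (guintptr) dev_ptr, on_wrapped_freed);
}
```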


GstCuda.CudaAllocator.prototype.alloc_wrapped

function GstCuda.CudaAllocator.prototype.alloc_wrapped(context: GstCuda.CudaContext, stream: GstCuda.CudaStream, info: GstVideo.VideoInfo, dev_ptr: CudaGst.deviceptr, user_data: Object, notify: GLib.DestroyNotify): {
    // javascript wrapper for 'gst_cuda_allocator_alloc_wrapped'
}

Allocates a new memory object that wraps the given CUDA device memory.

info must describe the actual memory layout; that is, the offset, stride, and size fields of info must match the memory layout of dev_ptr.

By default (when notify is null), the wrapped dev_ptr will be freed when the Gst.Memory is freed. Otherwise, if the caller sets notify, freeing dev_ptr is the caller's responsibility and the default GstCuda.CudaAllocator will not free it.

Parameters:

dev_ptr (CudaGst.deviceptr)

a CUdeviceptr CUDA device memory

user_data (Object)

user data

notify (GLib.DestroyNotify)

Called with user_data when the memory is freed

Returns (Gst.Memory)

a new Gst.Memory

Since : 1.24


GstCuda.CudaAllocator.alloc_wrapped

def GstCuda.CudaAllocator.alloc_wrapped (self, context, stream, info, dev_ptr, *user_data, notify):
    #python wrapper for 'gst_cuda_allocator_alloc_wrapped'

Allocates a new memory object that wraps the given CUDA device memory.

info must describe the actual memory layout; that is, the offset, stride, and size fields of info must match the memory layout of dev_ptr.

By default (when notify is None), the wrapped dev_ptr will be freed when the Gst.Memory is freed. Otherwise, if the caller sets notify, freeing dev_ptr is the caller's responsibility and the default GstCuda.CudaAllocator will not free it.

Parameters:

dev_ptr (CudaGst.deviceptr)

a CUdeviceptr CUDA device memory

user_data (variadic)

user data

notify (GLib.DestroyNotify)

Called with user_data when the memory is freed

Returns (Gst.Memory)

a new Gst.Memory

Since : 1.24


gst_cuda_allocator_set_active

gboolean
gst_cuda_allocator_set_active (GstCudaAllocator * allocator,
                               gboolean active)

Controls the active state of allocator. The default GstCudaAllocator is stateless, so the active state is ignored, but subclass implementations (e.g., GstCudaPoolAllocator) require explicit active-state control for their internal resource management.

This method is conceptually identical to gst_buffer_pool_set_active.

Parameters:

allocator

a GstCudaAllocator

active

the new active state

Returns

TRUE if active state of allocator was successfully updated.

Since : 1.24


GstCuda.CudaAllocator.prototype.set_active

function GstCuda.CudaAllocator.prototype.set_active(active: Number): {
    // javascript wrapper for 'gst_cuda_allocator_set_active'
}

Controls the active state of allocator. The default GstCuda.CudaAllocator is stateless, so the active state is ignored, but subclass implementations (e.g., GstCuda.CudaPoolAllocator) require explicit active-state control for their internal resource management.

This method is conceptually identical to gst_buffer_pool_set_active.

Parameters:

active (Number)

the new active state

Returns (Number)

true if active state of allocator was successfully updated.

Since : 1.24


GstCuda.CudaAllocator.set_active

def GstCuda.CudaAllocator.set_active (self, active):
    #python wrapper for 'gst_cuda_allocator_set_active'

Controls the active state of allocator. The default GstCuda.CudaAllocator is stateless, so the active state is ignored, but subclass implementations (e.g., GstCuda.CudaPoolAllocator) require explicit active-state control for their internal resource management.

This method is conceptually identical to gst_buffer_pool_set_active.

Parameters:

active (bool)

the new active state

Returns (bool)

True if active state of allocator was successfully updated.

Since : 1.24


gst_cuda_allocator_virtual_alloc

GstMemory *
gst_cuda_allocator_virtual_alloc (GstCudaAllocator * allocator,
                                  GstCudaContext * context,
                                  GstCudaStream * stream,
                                  const GstVideoInfo * info,
                                  const CUmemAllocationProp* prop,
                                  CUmemAllocationGranularity_flags granularity_flags)

Allocates new GstMemory object with CUDA virtual memory.

Parameters:

allocator

a GstCudaAllocator

context

a GstCudaContext

stream

a GstCudaStream

info

a GstVideoInfo

prop

allocation property

granularity_flags

allocation flags

Returns ( [transfer: full][nullable])

a newly allocated memory object or NULL if allocation is not supported

Since : 1.24
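
A hedged sketch of preparing the allocation property; the handle-type choice (so that the resulting memory can later be exported to an OS handle) and the helper name are assumptions of this example, and the CU_* enumerators come from the CUDA driver API:

```c
/* Sketch: request cuMemCreate-backed (virtual) memory whose handle
 * type allows later export via gst_cuda_memory_export(). */
#include <gst/cuda/gstcuda.h>

static GstMemory *
virtual_alloc_exportable (GstCudaAllocator * allocator,
    GstCudaContext * context, const GstVideoInfo * info, int device_id)
{
  CUmemAllocationProp prop = { 0, };

  prop.type = CU_MEM_ALLOCATION_TYPE_PINNED;
  prop.location.type = CU_MEM_LOCATION_TYPE_DEVICE;
  prop.location.id = device_id;
#ifdef G_OS_WIN32
  prop.requestedHandleTypes = CU_MEM_HANDLE_TYPE_WIN32;
#else
  prop.requestedHandleTypes = CU_MEM_HANDLE_TYPE_POSIX_FILE_DESCRIPTOR;
#endif

  /* returns NULL when virtual memory allocation is not supported */
  return gst_cuda_allocator_virtual_alloc (allocator, context, NULL, info,
      &prop, CU_MEM_ALLOC_GRANULARITY_MINIMUM);
}
```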


GstCuda.CudaAllocator.prototype.virtual_alloc

function GstCuda.CudaAllocator.prototype.virtual_alloc(context: GstCuda.CudaContext, stream: GstCuda.CudaStream, info: GstVideo.VideoInfo, prop: CudaGst.memAllocationProp, granularity_flags: CudaGst.memAllocationGranularity_flags): {
    // javascript wrapper for 'gst_cuda_allocator_virtual_alloc'
}

Allocates new Gst.Memory object with CUDA virtual memory.

Parameters:

prop (CudaGst.memAllocationProp)

allocation property

granularity_flags (CudaGst.memAllocationGranularity_flags)

allocation flags

Returns (Gst.Memory)

a newly allocated memory object or null if allocation is not supported

Since : 1.24


GstCuda.CudaAllocator.virtual_alloc

def GstCuda.CudaAllocator.virtual_alloc (self, context, stream, info, prop, granularity_flags):
    #python wrapper for 'gst_cuda_allocator_virtual_alloc'

Allocates new Gst.Memory object with CUDA virtual memory.

Parameters:

prop (CudaGst.memAllocationProp)

allocation property

granularity_flags (CudaGst.memAllocationGranularity_flags)

allocation flags

Returns (Gst.Memory)

a newly allocated memory object or None if allocation is not supported

Since : 1.24


Virtual Methods

set_active

gboolean
set_active (GstCudaAllocator * allocator,
            gboolean active)

Parameters:

allocator

a GstCudaAllocator

active

the new active state

Returns
No description available

Since : 1.24


vfunc_set_active

function vfunc_set_active(allocator: GstCuda.CudaAllocator, active: Number): {
    // javascript implementation of the 'set_active' virtual method
}

Parameters:

active (Number)

the new active state

Returns (Number)
No description available

Since : 1.24


do_set_active

def do_set_active (allocator, active):
    #python implementation of the 'set_active' virtual method

Parameters:

active (bool)

the new active state

Returns (bool)
No description available

Since : 1.24


GstCudaMemory

Members

mem (GstMemory) –
No description available
context (GstCudaContext *) –
No description available
info (GstVideoInfo) –
No description available

Since : 1.22


GstCuda.CudaMemory

Members

mem (Gst.Memory) –
No description available
context (GstCuda.CudaContext) –
No description available
info (GstVideo.VideoInfo) –
No description available

Since : 1.22




Methods

gst_cuda_memory_export

gboolean
gst_cuda_memory_export (GstCudaMemory * mem,
                        gpointer os_handle)

Exports the virtual memory handle to an OS-specific handle.

On Windows, os_handle should be a pointer to a HANDLE (i.e., void **); on Linux, a pointer to a file descriptor (i.e., int *).

The returned os_handle is owned by mem, so the caller must not close the handle.

Parameters:

mem

a GstCudaMemory

os_handle ( [out])

a pointer to OS handle

Returns

TRUE if successful

Since : 1.24
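
A hedged sketch of the per-OS handling described above; gating on the MMAP allocation method reflects the "virtual memory" wording of this entry, and the helper name is this example's own:

```c
/* Sketch: export the OS handle backing an MMAP-allocated memory.
 * The handle stays owned by the memory object, so it must not be
 * closed by the caller. */
#include <gst/cuda/gstcuda.h>

static gboolean
share_cuda_memory (GstCudaMemory * cmem)
{
  if (gst_cuda_memory_get_alloc_method (cmem) != GST_CUDA_MEMORY_ALLOC_MMAP)
    return FALSE;               /* only virtual memory is exportable */

#ifdef G_OS_WIN32
  HANDLE handle = NULL;

  if (!gst_cuda_memory_export (cmem, (gpointer) & handle))
    return FALSE;
  /* hand `handle` to another API; do not call CloseHandle() on it */
#else
  int fd = -1;

  if (!gst_cuda_memory_export (cmem, (gpointer) & fd))
    return FALSE;
  /* hand `fd` to another API; do not close() it */
#endif
  return TRUE;
}
```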


GstCuda.CudaMemory.prototype.export

function GstCuda.CudaMemory.prototype.export(): {
    // javascript wrapper for 'gst_cuda_memory_export'
}

Exports the virtual memory handle to an OS-specific handle.

On Windows, os_handle should be a pointer to a HANDLE (i.e., void **); on Linux, a pointer to a file descriptor (i.e., int *).

The returned os_handle is owned by mem, so the caller must not close the handle.

Parameters:

Returns a tuple made of:

(Number )

true if successful

os_handle (Object )

the exported OS handle

Since : 1.24


GstCuda.CudaMemory.export

def GstCuda.CudaMemory.export (self):
    #python wrapper for 'gst_cuda_memory_export'

Exports the virtual memory handle to an OS-specific handle.

On Windows, os_handle should be a pointer to a HANDLE (i.e., void **); on Linux, a pointer to a file descriptor (i.e., int *).

The returned os_handle is owned by mem, so the caller must not close the handle.

Parameters:

Returns a tuple made of:

(bool )

True if successful

os_handle (object )

the exported OS handle

Since : 1.24


gst_cuda_memory_get_alloc_method

GstCudaMemoryAllocMethod
gst_cuda_memory_get_alloc_method (GstCudaMemory * mem)

Queries the allocation method of mem

Parameters:

mem

a GstCudaMemory

Returns
No description available

Since : 1.24


GstCuda.CudaMemory.prototype.get_alloc_method

function GstCuda.CudaMemory.prototype.get_alloc_method(): {
    // javascript wrapper for 'gst_cuda_memory_get_alloc_method'
}

Queries the allocation method

Parameters:

Returns (GstCuda.CudaMemoryAllocMethod)
No description available

Since : 1.24


GstCuda.CudaMemory.get_alloc_method

def GstCuda.CudaMemory.get_alloc_method (self):
    #python wrapper for 'gst_cuda_memory_get_alloc_method'

Queries the allocation method

Parameters:

Returns (GstCuda.CudaMemoryAllocMethod)
No description available

Since : 1.24


gst_cuda_memory_get_stream

GstCudaStream *
gst_cuda_memory_get_stream (GstCudaMemory * mem)

Gets CUDA stream object associated with mem

Parameters:

mem

A GstCudaMemory

Returns ( [transfer: none][nullable])

a GstCudaStream or NULL if default CUDA stream is in use

Since : 1.24


GstCuda.CudaMemory.prototype.get_stream

function GstCuda.CudaMemory.prototype.get_stream(): {
    // javascript wrapper for 'gst_cuda_memory_get_stream'
}

Gets CUDA stream object associated with mem

Parameters:

Returns (GstCuda.CudaStream)

a GstCuda.CudaStream or null if default CUDA stream is in use

Since : 1.24


GstCuda.CudaMemory.get_stream

def GstCuda.CudaMemory.get_stream (self):
    #python wrapper for 'gst_cuda_memory_get_stream'

Gets CUDA stream object associated with mem

Parameters:

Returns (GstCuda.CudaStream)

a GstCuda.CudaStream or None if default CUDA stream is in use

Since : 1.24


gst_cuda_memory_get_texture

gboolean
gst_cuda_memory_get_texture (GstCudaMemory * mem,
                             guint plane,
                             CUfilter_mode filter_mode,
                             CUtexObject* texture)

Creates a CUtexObject with the given parameters

Parameters:

mem

A GstCudaMemory

plane

the plane index

filter_mode

filter mode

texture ( [out][transfer: none])

a pointer to CUtexObject object

Returns

TRUE if successful

Since : 1.24
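
A hedged sketch of the call above, e.g. to prepare a texture for a sampling kernel; the plane index, the linear filter mode (a CUDA driver API enumerator), and the helper name are assumptions of this example:

```c
/* Sketch: build a texture object for plane 0 of a frame before
 * launching a CUDA kernel that samples it. CU_TR_FILTER_MODE_LINEAR
 * selects linear filtering; the texture is owned by the memory
 * object (transfer: none), so it is not destroyed here. */
#include <gst/cuda/gstcuda.h>

static gboolean
prepare_texture (GstCudaMemory * cmem, CUtexObject * tex)
{
  return gst_cuda_memory_get_texture (cmem, 0 /* plane */ ,
      CU_TR_FILTER_MODE_LINEAR, tex);
}
```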


GstCuda.CudaMemory.prototype.get_texture

function GstCuda.CudaMemory.prototype.get_texture(plane: Number, filter_mode: CudaGst.filter_mode): {
    // javascript wrapper for 'gst_cuda_memory_get_texture'
}

Creates a CUtexObject with the given parameters

Parameters:

plane (Number)

the plane index

filter_mode (CudaGst.filter_mode)

filter mode

Returns a tuple made of:

(Number )

true if successful

texture (CudaGst.texObject )

the created texture object

Since : 1.24


GstCuda.CudaMemory.get_texture

def GstCuda.CudaMemory.get_texture (self, plane, filter_mode):
    #python wrapper for 'gst_cuda_memory_get_texture'

Creates a CUtexObject with the given parameters

Parameters:

plane (int)

the plane index

filter_mode (CudaGst.filter_mode)

filter mode

Returns a tuple made of:

(bool )

True if successful

texture (CudaGst.texObject )

the created texture object

Since : 1.24


gst_cuda_memory_get_token_data

gpointer
gst_cuda_memory_get_token_data (GstCudaMemory * mem,
                                gint64 token)

Gets back user data pointer stored via gst_cuda_memory_set_token_data

Parameters:

mem

a GstCudaMemory

token

a user token

Returns ( [transfer: none][nullable])

user data pointer or NULL

Since : 1.24


GstCuda.CudaMemory.prototype.get_token_data

function GstCuda.CudaMemory.prototype.get_token_data(token: Number): {
    // javascript wrapper for 'gst_cuda_memory_get_token_data'
}

Gets back user data pointer stored via GstCuda.CudaMemory.prototype.set_token_data

Parameters:

token (Number)

a user token

Returns (Object)

user data pointer or null

Since : 1.24


GstCuda.CudaMemory.get_token_data

def GstCuda.CudaMemory.get_token_data (self, token):
    #python wrapper for 'gst_cuda_memory_get_token_data'

Gets back user data pointer stored via GstCuda.CudaMemory.set_token_data

Parameters:

token (int)

an user token

Returns (object)

user data pointer or None

Since : 1.24


gst_cuda_memory_get_user_data

gpointer
gst_cuda_memory_get_user_data (GstCudaMemory * mem)

Gets user data pointer stored via gst_cuda_allocator_alloc_wrapped

Parameters:

mem

A GstCudaMemory

Returns ( [transfer: none][nullable])

the user data pointer

Since : 1.24


GstCuda.CudaMemory.prototype.get_user_data

function GstCuda.CudaMemory.prototype.get_user_data(): {
    // javascript wrapper for 'gst_cuda_memory_get_user_data'
}

Gets user data pointer stored via GstCuda.CudaAllocator.prototype.alloc_wrapped

Parameters:

Returns (Object)

the user data pointer

Since : 1.24


GstCuda.CudaMemory.get_user_data

def GstCuda.CudaMemory.get_user_data (self):
    #python wrapper for 'gst_cuda_memory_get_user_data'

Gets user data pointer stored via GstCuda.CudaAllocator.alloc_wrapped

Parameters:

Returns (object)

the user data pointer

Since : 1.24


gst_cuda_memory_set_token_data

gst_cuda_memory_set_token_data (GstCudaMemory * mem,
                                gint64 token,
                                gpointer data,
                                GDestroyNotify notify)

Sets an opaque user data on a GstCudaMemory

Parameters:

mem

a GstCudaMemory

token

a user token

data

user data

notify

function to invoke with data as its argument when data needs to be freed

Since : 1.24
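
A hedged sketch pairing this method with gst_cuda_memory_get_token_data; the token value only needs to be unique to its consumer, and since this page does not specify how tokens are minted, a fixed constant is assumed here purely for illustration:

```c
/* Sketch: cache per-consumer data on a memory object under a private
 * token, then look it up again later. */
#include <gst/cuda/gstcuda.h>

#define MY_TOKEN G_GINT64_CONSTANT (0x4d59544f4b)       /* arbitrary */

static void
attach_and_read_back (GstCudaMemory * cmem)
{
  gchar *data = g_strdup ("per-memory state");

  /* `data` is released with g_free() when the memory is freed or the
   * token data is replaced */
  gst_cuda_memory_set_token_data (cmem, MY_TOKEN, data, g_free);

  /* later, possibly elsewhere: returns the stored pointer or NULL */
  data = gst_cuda_memory_get_token_data (cmem, MY_TOKEN);
}
```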


GstCuda.CudaMemory.prototype.set_token_data

function GstCuda.CudaMemory.prototype.set_token_data(token: Number, data: Object, notify: GLib.DestroyNotify): {
    // javascript wrapper for 'gst_cuda_memory_set_token_data'
}

Sets an opaque user data on a GstCuda.CudaMemory

Parameters:

token (Number)

a user token

data (Object)

user data

notify (GLib.DestroyNotify)

function to invoke with data as its argument when data needs to be freed

Since : 1.24


GstCuda.CudaMemory.set_token_data

def GstCuda.CudaMemory.set_token_data (self, token, data, notify):
    #python wrapper for 'gst_cuda_memory_set_token_data'

Sets an opaque user data on a GstCuda.CudaMemory

Parameters:

token (int)

an user token

data (object)

an user data

notify (GLib.DestroyNotify)

function to invoke with data as argument, when data needs to be freed

Since : 1.24


gst_cuda_memory_sync

gst_cuda_memory_sync (GstCudaMemory * mem)

Performs synchronization if needed

Parameters:

mem

A GstCudaMemory

Since : 1.24


GstCuda.CudaMemory.prototype.sync

function GstCuda.CudaMemory.prototype.sync(): {
    // javascript wrapper for 'gst_cuda_memory_sync'
}

Performs synchronization if needed

Parameters:

Since : 1.24


GstCuda.CudaMemory.sync

def GstCuda.CudaMemory.sync (self):
    #python wrapper for 'gst_cuda_memory_sync'

Performs synchronization if needed

Parameters:

Since : 1.24


Functions

gst_cuda_memory_init_once

gst_cuda_memory_init_once ()

Ensures that the GstCudaAllocator is initialized and ready to be used.

Since : 1.22


GstCuda.CudaMemory.prototype.init_once

function GstCuda.CudaMemory.prototype.init_once(): {
    // javascript wrapper for 'gst_cuda_memory_init_once'
}

Ensures that the GstCuda.CudaAllocator is initialized and ready to be used.

Since : 1.22


GstCuda.CudaMemory.init_once

def GstCuda.CudaMemory.init_once ():
    #python wrapper for 'gst_cuda_memory_init_once'

Ensures that the GstCuda.CudaAllocator is initialized and ready to be used.

Since : 1.22


GstCudaPoolAllocator

GObject
    ╰──GInitiallyUnowned
        ╰──GstObject
            ╰──GstAllocator
                ╰──GstCudaAllocator
                    ╰──GstCudaPoolAllocator

A GstCudaAllocator subclass for pooled CUDA memory

Members

parent (GstCudaAllocator) –
No description available
context (GstCudaContext *) –
No description available
stream (GstCudaStream *) –
No description available
info (GstVideoInfo) –
No description available

Since : 1.24


Class structure

GstCudaPoolAllocatorClass

Fields
parent_class (GstCudaAllocatorClass) –
No description available

GstCuda.CudaPoolAllocatorClass

Attributes
parent_class (GstCuda.CudaAllocatorClass) –
No description available


GstCuda.CudaPoolAllocator

GObject.Object
    ╰──GObject.InitiallyUnowned
        ╰──Gst.Object
            ╰──Gst.Allocator
                ╰──GstCuda.CudaAllocator
                    ╰──GstCuda.CudaPoolAllocator

A GstCuda.CudaAllocator subclass for pooled CUDA memory

Members

parent (GstCuda.CudaAllocator) –
No description available
context (GstCuda.CudaContext) –
No description available
stream (GstCuda.CudaStream) –
No description available
info (GstVideo.VideoInfo) –
No description available

Since : 1.24




Constructors

gst_cuda_pool_allocator_new

GstCudaPoolAllocator *
gst_cuda_pool_allocator_new (GstCudaContext * context,
                             GstCudaStream * stream,
                             const GstVideoInfo * info)

Creates a new GstCudaPoolAllocator instance.

Parameters:

context

a GstCudaContext

stream ( [allow-none])

a GstCudaStream

info

a GstVideoInfo

Returns ( [transfer: full])

a new GstCudaPoolAllocator instance

Since : 1.24


GstCuda.CudaPoolAllocator.prototype.new

function GstCuda.CudaPoolAllocator.prototype.new(context: GstCuda.CudaContext, stream: GstCuda.CudaStream, info: GstVideo.VideoInfo): {
    // javascript wrapper for 'gst_cuda_pool_allocator_new'
}

Creates a new GstCuda.CudaPoolAllocator instance.

Since : 1.24


GstCuda.CudaPoolAllocator.new

def GstCuda.CudaPoolAllocator.new (context, stream, info):
    #python wrapper for 'gst_cuda_pool_allocator_new'

Creates a new GstCuda.CudaPoolAllocator instance.

Since : 1.24


gst_cuda_pool_allocator_new_for_virtual_memory

GstCudaPoolAllocator *
gst_cuda_pool_allocator_new_for_virtual_memory (GstCudaContext * context,
                                                GstCudaStream * stream,
                                                const GstVideoInfo * info,
                                                const CUmemAllocationProp* prop,
                                                CUmemAllocationGranularity_flags granularity_flags)

Creates a new GstCudaPoolAllocator instance for virtual memory allocation.

Parameters:

context

a GstCudaContext

stream ( [allow-none])

a GstCudaStream

info

a GstVideoInfo

prop
No description available
granularity_flags
No description available
Returns ( [transfer: full])

a new GstCudaPoolAllocator instance

Since : 1.24


GstCuda.CudaPoolAllocator.prototype.new_for_virtual_memory

function GstCuda.CudaPoolAllocator.prototype.new_for_virtual_memory(context: GstCuda.CudaContext, stream: GstCuda.CudaStream, info: GstVideo.VideoInfo, prop: CudaGst.memAllocationProp, granularity_flags: CudaGst.memAllocationGranularity_flags): {
    // javascript wrapper for 'gst_cuda_pool_allocator_new_for_virtual_memory'
}

Creates a new GstCuda.CudaPoolAllocator instance for virtual memory allocation.

Parameters:

prop (CudaGst.memAllocationProp)
No description available
granularity_flags (CudaGst.memAllocationGranularity_flags)
No description available

Since : 1.24


GstCuda.CudaPoolAllocator.new_for_virtual_memory

def GstCuda.CudaPoolAllocator.new_for_virtual_memory (context, stream, info, prop, granularity_flags):
    #python wrapper for 'gst_cuda_pool_allocator_new_for_virtual_memory'

Creates a new GstCuda.CudaPoolAllocator instance for virtual memory allocation.

Parameters:

prop (CudaGst.memAllocationProp)
No description available
granularity_flags (CudaGst.memAllocationGranularity_flags)
No description available

Since : 1.24


Methods

gst_cuda_pool_allocator_acquire_memory

GstFlowReturn
gst_cuda_pool_allocator_acquire_memory (GstCudaPoolAllocator * allocator,
                                        GstMemory ** memory)

Acquires a GstMemory from allocator. memory should point to a memory location that can hold a pointer to the new GstMemory.

Parameters:

allocator

a GstCudaPoolAllocator

memory ( [out])

a GstMemory

Returns

a GstFlowReturn such as GST_FLOW_FLUSHING when the allocator is inactive.

Since : 1.24
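
A hedged sketch of the full pool-allocator lifecycle, combining this method with the constructor and gst_cuda_allocator_set_active described above; the helper name and the pre-existing context/info are assumptions of this example:

```c
/* Sketch: activation gates acquisition; once the allocator is
 * deactivated, acquire returns GST_FLOW_FLUSHING. */
#include <gst/cuda/gstcuda.h>

static GstFlowReturn
pool_example (GstCudaContext * context, const GstVideoInfo * info)
{
  GstCudaPoolAllocator *alloc;
  GstMemory *mem = NULL;
  GstFlowReturn ret;

  alloc = gst_cuda_pool_allocator_new (context, NULL, info);
  gst_cuda_allocator_set_active (GST_CUDA_ALLOCATOR (alloc), TRUE);

  ret = gst_cuda_pool_allocator_acquire_memory (alloc, &mem);
  if (ret == GST_FLOW_OK)
    gst_memory_unref (mem);     /* returns the memory to the pool */

  gst_cuda_allocator_set_active (GST_CUDA_ALLOCATOR (alloc), FALSE);
  gst_object_unref (alloc);

  return ret;
}
```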


GstCuda.CudaPoolAllocator.prototype.acquire_memory

function GstCuda.CudaPoolAllocator.prototype.acquire_memory(): {
    // javascript wrapper for 'gst_cuda_pool_allocator_acquire_memory'
}

Acquires a Gst.Memory from allocator. memory should point to a memory location that can hold a pointer to the new Gst.Memory.

Returns a tuple made of:

a Gst.FlowReturn such as Gst.FlowReturn.FLUSHING when the allocator is inactive.

memory (Gst.Memory )

the acquired Gst.Memory

Since : 1.24


GstCuda.CudaPoolAllocator.acquire_memory

def GstCuda.CudaPoolAllocator.acquire_memory (self):
    #python wrapper for 'gst_cuda_pool_allocator_acquire_memory'

Acquires a Gst.Memory from allocator. memory should point to a memory location that can hold a pointer to the new Gst.Memory.

Returns a tuple made of:

a Gst.FlowReturn such as Gst.FlowReturn.FLUSHING when the allocator is inactive.

memory (Gst.Memory )

the acquired Gst.Memory

Since : 1.24


Functions

gst_is_cuda_memory

gboolean
gst_is_cuda_memory (GstMemory * mem)

Checks whether mem is a CUDA memory

Parameters:

mem

A GstMemory

Returns
No description available

Since : 1.22
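
A hedged sketch of a typical use of this check inside an element; the branch structure and function name are this example's own:

```c
/* Sketch: choose between a zero-copy path and an upload path
 * depending on whether an incoming buffer already holds CUDA memory. */
#include <gst/cuda/gstcuda.h>

static void
inspect_buffer (GstBuffer * buffer)
{
  GstMemory *mem = gst_buffer_peek_memory (buffer, 0);

  if (gst_is_cuda_memory (mem)) {
    GstCudaMemory *cmem = GST_CUDA_MEMORY_CAST (mem);

    /* zero-copy: the data already lives in CUDA memory */
    (void) cmem;
  } else {
    /* fall back: upload from system memory first */
  }
}
```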


GstCuda.prototype.is_cuda_memory

function GstCuda.prototype.is_cuda_memory(mem: Gst.Memory): {
    // javascript wrapper for 'gst_is_cuda_memory'
}

Checks whether mem is a CUDA memory

Parameters:

mem (Gst.Memory)

A Gst.Memory

Returns (Number)
No description available

Since : 1.22


GstCuda.is_cuda_memory

def GstCuda.is_cuda_memory (mem):
    #python wrapper for 'gst_is_cuda_memory'

Checks whether mem is a CUDA memory

Parameters:

mem (Gst.Memory)

A Gst.Memory

Returns (bool)
No description available

Since : 1.22


Function Macros

GST_CUDA_ALLOCATOR_CAST

#define GST_CUDA_ALLOCATOR_CAST(obj)        ((GstCudaAllocator *)(obj))

Since : 1.22


GST_CUDA_MEMORY_CAST

#define GST_CUDA_MEMORY_CAST(mem)           ((GstCudaMemory *) (mem))

Since : 1.22


Enumerations

GstCudaMemoryAllocMethod

CUDA memory allocation method

Members
GST_CUDA_MEMORY_ALLOC_UNKNOWN (0) –
No description available
GST_CUDA_MEMORY_ALLOC_MALLOC (1) –

Memory allocated via cuMemAlloc or cuMemAllocPitch

(Since: 1.24)
GST_CUDA_MEMORY_ALLOC_MMAP (2) –

Memory allocated via cuMemCreate and cuMemMap

(Since: 1.24)

Since : 1.24


GstCuda.CudaMemoryAllocMethod

CUDA memory allocation method

Members
GstCuda.CudaMemoryAllocMethod.UNKNOWN (0) –
No description available
GstCuda.CudaMemoryAllocMethod.MALLOC (1) –

Memory allocated via cuMemAlloc or cuMemAllocPitch

(Since: 1.24)
GstCuda.CudaMemoryAllocMethod.MMAP (2) –

Memory allocated via cuMemCreate and cuMemMap

(Since: 1.24)

Since : 1.24




GstCudaMemoryTransfer

CUDA memory transfer flags

Members
GST_CUDA_MEMORY_TRANSFER_NEED_DOWNLOAD (1048576) –

the device memory needs downloading to the staging memory

(Since: 1.22)
GST_CUDA_MEMORY_TRANSFER_NEED_UPLOAD (2097152) –

the staging memory needs uploading to the device memory

(Since: 1.22)
GST_CUDA_MEMORY_TRANSFER_NEED_SYNC (4194304) –

the device memory needs synchronization

(Since: 1.24)

GstCuda.CudaMemoryTransfer

CUDA memory transfer flags

Members
GstCuda.CudaMemoryTransfer.DOWNLOAD (1048576) –

the device memory needs downloading to the staging memory

(Since: 1.22)
GstCuda.CudaMemoryTransfer.UPLOAD (2097152) –

the staging memory needs uploading to the device memory

(Since: 1.22)
GstCuda.CudaMemoryTransfer.SYNC (4194304) –

the device memory needs synchronization

(Since: 1.24)


Constants

GST_CAPS_FEATURE_MEMORY_CUDA_MEMORY

#define GST_CAPS_FEATURE_MEMORY_CUDA_MEMORY "memory:CUDAMemory"

Name of the caps feature for indicating the use of GstCudaMemory

Since : 1.22


GstCuda.CAPS_FEATURE_MEMORY_CUDA_MEMORY

Name of the caps feature for indicating the use of GstCuda.CudaMemory

Since : 1.22




GST_CUDA_MEMORY_TYPE_NAME

#define GST_CUDA_MEMORY_TYPE_NAME "gst.cuda.memory"

Name of the CUDA memory type

Since : 1.22


GstCuda.CUDA_MEMORY_TYPE_NAME

Name of the CUDA memory type

Since : 1.22




GST_MAP_CUDA

#define GST_MAP_CUDA (GST_MAP_FLAG_LAST << 1)

Flag indicating that the CUDA device memory should be mapped instead of the system memory.

Combining GST_MAP_CUDA with GST_MAP_WRITE has the same semantics as writing to CUDA device/host memory. Conversely, combining GST_MAP_CUDA with GST_MAP_READ has the same semantics as reading from CUDA device/host memory.

Since : 1.22
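
A hedged sketch of mapping with this flag; treating map.data as a CUDA device pointer follows from the flag's description above, and the function name is this example's own:

```c
/* Sketch: map a buffer's CUDA memory for device-side reading.
 * With GST_MAP_CUDA, map.data refers to CUDA memory rather than
 * system memory, so it may be handed to CUDA APIs. */
#include <gst/cuda/gstcuda.h>

static void
read_on_device (GstBuffer * buffer)
{
  GstMapInfo map;

  if (gst_buffer_map (buffer, &map, GST_MAP_READ | GST_MAP_CUDA)) {
    CUdeviceptr ptr = (CUdeviceptr) (guintptr) map.data;

    /* ... pass `ptr` to a kernel launch or cuMemcpyDtoH ... */
    (void) ptr;
    gst_buffer_unmap (buffer, &map);
  }
}
```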


GstCuda.MAP_CUDA

Flag indicating that the CUDA device memory should be mapped instead of the system memory.

Combining GstCuda.MAP_CUDA with Gst.MapFlags.WRITE has the same semantics as writing to CUDA device/host memory. Conversely, combining GstCuda.MAP_CUDA with Gst.MapFlags.READ has the same semantics as reading from CUDA device/host memory.

Since : 1.22



