Consolidate video.py and capture.py for local hardware acceleration #570
Comments
Via ChatGPT: to replace the file output, here's a conceptual outline of how to set this up:

**Step-by-Step Implementation**

First, modify your `Capture` class:

```python
from Foundation import NSObject, NSLog
import AVFoundation as AVF
from Quartz import CGMainDisplayID


class SampleBufferDelegate(NSObject):
    def captureOutput_didOutputSampleBuffer_fromConnection_(
        self, captureOutput, sampleBuffer, connection
    ):
        # This method is called with a CMSampleBufferRef `sampleBuffer`.
        # You can convert it to a screenshot here and call your desired callback.
        NSLog("Received a frame")
        # Conversion to screenshot and callback call goes here.


class Capture:
    def __init__(self):
        # Initialize as before...
        self.videoDataOutput = None
        self.videoDataOutputQueue = None
        self.sampleBufferDelegate = None

    def start(self, audio: bool = False, camera: bool = False):
        # Set up the session as before...

        # Set up the video data output.
        self.videoDataOutput = AVF.AVCaptureVideoDataOutput.alloc().init()
        self.videoDataOutputQueue = AVF.dispatch_queue_create(
            "videoDataOutputQueue", None
        )
        self.sampleBufferDelegate = SampleBufferDelegate.alloc().init()
        self.videoDataOutput.setSampleBufferDelegate_queue_(
            self.sampleBufferDelegate, self.videoDataOutputQueue
        )
        if self.session.canAddOutput_(self.videoDataOutput):
            self.session.addOutput_(self.videoDataOutput)
```

Notes: This approach allows you to intercept video frames as they are captured, enabling you to process and use them as screenshots within your application.
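For the conversion step above, here's a sketch of one way to copy the `CMSampleBufferRef`'s pixels into a numpy array. It assumes `pyobjc-framework-CoreMedia` and numpy are installed, that the data output is configured for 32-bit BGRA (e.g. via `videoDataOutput.setVideoSettings_(...)` with `kCVPixelFormatType_32BGRA`), and that `sample_buffer_to_array` is a hypothetical helper name:

```python
# Sketch (not the existing OpenAdapt code): copy a CMSampleBufferRef's
# pixels into a numpy array. Assumes the output emits 32-bit BGRA frames.
import numpy as np
from CoreMedia import CMSampleBufferGetImageBuffer
from Quartz import (
    CVPixelBufferGetBaseAddress,
    CVPixelBufferGetBytesPerRow,
    CVPixelBufferGetHeight,
    CVPixelBufferGetWidth,
    CVPixelBufferLockBaseAddress,
    CVPixelBufferUnlockBaseAddress,
    kCVPixelBufferLock_ReadOnly,
)


def sample_buffer_to_array(sample_buffer):
    """Return the frame in `sample_buffer` as a (height, width, 4) BGRA array."""
    pixel_buffer = CMSampleBufferGetImageBuffer(sample_buffer)
    CVPixelBufferLockBaseAddress(pixel_buffer, kCVPixelBufferLock_ReadOnly)
    try:
        width = CVPixelBufferGetWidth(pixel_buffer)
        height = CVPixelBufferGetHeight(pixel_buffer)
        bytes_per_row = CVPixelBufferGetBytesPerRow(pixel_buffer)
        # PyObjC exposes the void* base address as an objc.varlist;
        # as_buffer() gives a memoryview over the raw pixel data.
        base = CVPixelBufferGetBaseAddress(pixel_buffer)
        raw = np.frombuffer(base.as_buffer(bytes_per_row * height), dtype=np.uint8)
        # Copy before unlocking (the base address is only valid while locked),
        # then trim any per-row padding down to the visible width.
        frame = raw.copy().reshape((height, bytes_per_row // 4, 4))[:, :width, :]
    finally:
        CVPixelBufferUnlockBaseAddress(pixel_buffer, kCVPixelBufferLock_ReadOnly)
    return frame
```

Inside `captureOutput_didOutputSampleBuffer_fromConnection_`, the delegate could then hand `sample_buffer_to_array(sampleBuffer)` to whatever callback or writer needs the screenshot.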
@0dm thoughts? 🙏 😄
This could work. I will look into implementing this sometime this week.
Regarding this:
See [...]
@Cody-DV for a Windows approach see: https://github.com/OpenAdaptAI/OpenAdapt/blob/main/openadapt/capture/_windows.py and https://chat.openai.com/share/19cc37a0-750f-451a-95cf-acad27efb7b6

We can replace the cv2 writer with what we have in https://github.com/OpenAdaptAI/OpenAdapt/blob/main/openadapt/video.py
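A minimal sketch of what that might look like, assuming `video.py`'s writer is PyAV-based (an assumption on my part; the class name, codec, and frame rate below are illustrative):

```python
# Sketch: encode in-memory BGRA frames with PyAV instead of cv2.VideoWriter.
# Assumes `pip install av numpy`; names and parameters are illustrative.
import av
import numpy as np


class FrameWriter:
    def __init__(self, path: str, width: int, height: int, fps: int = 30) -> None:
        self.container = av.open(path, mode="w")
        self.stream = self.container.add_stream("h264", rate=fps)
        self.stream.width = width
        self.stream.height = height
        self.stream.pix_fmt = "yuv420p"

    def write(self, frame_bgra: np.ndarray) -> None:
        # Wrap the raw BGRA array; PyAV handles the pixel-format conversion.
        frame = av.VideoFrame.from_ndarray(frame_bgra, format="bgra")
        for packet in self.stream.encode(frame):
            self.container.mux(packet)

    def close(self) -> None:
        # Flush buffered packets before closing the container.
        for packet in self.stream.encode():
            self.container.mux(packet)
        self.container.close()
```

The delegate's callback would construct one writer per recording and call `write()` per frame; a hardware-accelerated encoder (e.g. `h264_videotoolbox` on macOS) could be substituted for `h264` where available.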
**Feature request**

`capture/_macos.py` uses `AVFoundation`; `capture/_windows.py` uses `screen_recorder_sdk`, which uses the Media Foundation API. These are likely to be more performant than `mss`, used in `record.py` and `video.py`, but currently `capture` does not support extracting time-aligned screenshots (while `video` does).

This issue will be complete once we have modified these files to support saving video files recorded via `openadapt.capture` from which time-aligned screenshots can be extracted. I.e., we need to modify `openadapt.capture._macos.Capture` and `openadapt.capture._windows.Capture` to supply screenshots in memory instead of writing to a file, e.g. `self.session.addOutput_(self.file_output)`; a sketch of one possible target interface follows below.

**Motivation**

Local hardware acceleration -> maximum performance
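One possible shape for that in-memory interface, sketched under the assumption that the platform `Capture` classes can push frames from their capture threads; all names here are illustrative, not the current `openadapt.capture` API:

```python
# Hypothetical sketch: a platform-agnostic base class that emits
# (frame, timestamp) pairs so downstream code can extract time-aligned
# screenshots. Names are illustrative, not the existing API.
import time
from typing import Callable, Optional

import numpy as np

# Callback receives the frame and the capture timestamp (seconds).
FrameCallback = Callable[[np.ndarray, float], None]


class InMemoryCapture:
    def __init__(self) -> None:
        self.on_frame: Optional[FrameCallback] = None

    def start(self, on_frame: FrameCallback) -> None:
        # Platform subclasses (_macos, _windows) start their sessions here
        # instead of attaching a file output.
        self.on_frame = on_frame

    def _emit(self, frame: np.ndarray) -> None:
        # Called from the platform capture thread for every frame.
        if self.on_frame is not None:
            self.on_frame(frame, time.time())
```

With timestamps recorded per frame, extracting a time-aligned screenshot reduces to looking up the frame whose timestamp is closest to the event's.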