Problem Description
I am capturing video from a webcam that provides an MJPEG stream. I do the video capture in a worker thread, and I start the capture like this:
const std::string videoStreamAddress = "http://192.168.1.173:80/live/0/mjpeg.jpg?x.mjpeg";
qDebug() << "start";
cap.open(videoStreamAddress);
qDebug() << "really started";
cap.set(CV_CAP_PROP_FRAME_WIDTH, 720);
cap.set(CV_CAP_PROP_FRAME_HEIGHT, 576);
The camera feeds the stream at 20 fps. But if I also read at 20 fps, like this:
if (!cap.isOpened()) return;
Mat frame;
cap >> frame; // get a new frame from camera
mutex.lock();
m_imageFrame = frame;
mutex.unlock();
Then there is a 3+ second lag. The reason is that the captured video is first stored in a buffer. When I first start the camera, the buffer accumulates frames that I do not read out, so if I read from the buffer it always gives me old frames. The only solution I have now is to read the buffer at 30 fps so that it drains quickly and the lag stays small.
Is there any other possible solution so that I can clear/flush the buffer manually each time I start the camera?
Recommended Answer
OpenCV Solution
According to this source, you can set the buffer size of a cv::VideoCapture object.
cv::VideoCapture cap;
cap.set(CV_CAP_PROP_BUFFERSIZE, 3); // internal buffer will now store only 3 frames
// rest of your code...
There is an important limitation, however:

CV_CAP_PROP_BUFFERSIZE Amount of frames stored in internal buffer memory (note: only supported by DC1394 v 2.x backend currently)
Update from the comments: in newer versions of OpenCV (3.4+), the limitation seems to be gone, and the code uses scoped enumerations:
cv::VideoCapture cap;
cap.set(cv::CAP_PROP_BUFFERSIZE, 3);
解決方法 1
如果解決方案不起作用,請查看這篇博文 解釋了如何解決這個問題.
Hackaround 1
If the solution does not work, take a look at this post that explains how to hack around the issue.
In a nutshell: measure the time needed to query a frame; if it is too low, the frame was read from the buffer and can be discarded. Keep querying frames until the measured time exceeds a certain limit; when that happens, the buffer is empty and the returned frame is up to date.
(The answer on the linked post reports that returning a frame from the buffer takes about 1/8th the time of returning an up-to-date frame. Your mileage may vary, of course!)
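The drain loop described above can be sketched as follows. All names here are hypothetical (not OpenCV API); in a real program `grab_ms` would wrap a timed call to `cap.read(frame)` using `std::chrono`, but it is injected here so the logic runs and can be checked without a camera:

```cpp
#include <functional>

// Sketch of the timing hack, assuming frames that arrive faster than
// `threshold_ms` come from the internal buffer and should be discarded.
// `grab_ms` returns how long one frame query took, in milliseconds
// (in real code: time a cap.read(frame) call with std::chrono).
// `max_discards` bounds the loop in case the threshold is never exceeded.
int discard_buffered_frames(const std::function<double()>& grab_ms,
                            double threshold_ms, int max_discards) {
    int discarded = 0;
    while (discarded < max_discards && grab_ms() < threshold_ms) {
        ++discarded;  // stale buffered frame: throw it away
    }
    return discarded;  // once a query is slow, the next frame is fresh
}
```

The threshold itself has to be calibrated per setup; the 1/8th ratio above is only the figure reported in the linked post.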
Hackaround 2
A different solution, inspired by this post, is to create a third thread that grabs frames continuously at high speed to keep the buffer empty. This thread should use cv::VideoCapture::grab() to avoid the decoding overhead.
You could use a simple spin lock to synchronize frame reads between the real worker thread and the third thread.
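The grab-thread-plus-spin-lock pattern can be sketched like this. The names and structure are illustrative, not from OpenCV: in a real program the grabber would call `cap.grab()` and the worker `cap.retrieve(frame)`, but the "frame" is modeled as an int so the sketch runs without a camera:

```cpp
#include <atomic>
#include <thread>

// Spin lock guarding the shared latest-frame slot.
std::atomic_flag frame_lock = ATOMIC_FLAG_INIT;
int latest_frame = 0;            // stands in for the most recent cv::Mat
std::atomic<bool> running{true};

// Third thread: grabs continuously so the driver buffer never fills.
void grabber() {
    while (running.load()) {
        while (frame_lock.test_and_set(std::memory_order_acquire)) {}
        ++latest_frame;          // stands in for cap.grab()
        frame_lock.clear(std::memory_order_release);
    }
}

// Worker thread: fetches whatever frame is newest right now.
int read_latest() {
    while (frame_lock.test_and_set(std::memory_order_acquire)) {}
    int frame = latest_frame;    // stands in for cap.retrieve(frame)
    frame_lock.clear(std::memory_order_release);
    return frame;
}
```

A spin lock is reasonable here because both critical sections are tiny; if the worker did heavy processing while holding the lock, a std::mutex would waste less CPU.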
這篇關于由于捕獲緩沖區,OpenCV VideoCapture 滯后的文章就介紹到這了,希望我們推薦的答案對大家有所幫助,也希望大家多多支持html5模板網!