Problem Description
What is the fastest way to display images in a Qt widget? I have decoded the video using libavformat and libavcodec, so I already have raw RGB or YCbCr 4:2:0 frames. I am currently using a QGraphicsView with a QGraphicsScene containing a QGraphicsPixmapItem, and I get the frame data into a QPixmap by constructing a QImage from a memory buffer and converting it with QPixmap::fromImage().
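For reference, a condensed sketch of that pipeline, assuming a decoded RGB32 buffer; frameData, width, height and item are illustrative placeholders, not names from the original post:

#include <QGraphicsPixmapItem>
#include <QImage>
#include <QPixmap>

void showFrame(const uchar* frameData, int width, int height,
               QGraphicsPixmapItem* item)
{
    // Wrap the raw buffer without copying, then convert to a QPixmap.
    QImage image(frameData, width, height, width * 4, QImage::Format_RGB32);
    item->setPixmap(QPixmap::fromImage(image));  // the suspected expensive step
}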
I like the results of this and it seems relatively fast, but I can't help but think that there must be a more efficient way. I've also heard that the QImage to QPixmap conversion is expensive. I have implemented a solution that uses an SDL overlay on a widget, but I'd like to stay with just Qt since I am able to easily capture clicks and other user interaction with the video display using the QGraphicsView.
I am doing any required video scaling or colorspace conversions with libswscale, so I would just like to know if anyone has a more efficient way to display the image data after all processing has been performed.
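For illustration, a minimal sketch of that libswscale step, assuming a decoded AVFrame in YUV 4:2:0 and a caller-allocated RGB32 buffer (the function and variable names are placeholders):

extern "C" {
#include <libavcodec/avcodec.h>
#include <libswscale/swscale.h>
}

void yuv420pToRgb32(const AVFrame* frame, int width, int height,
                    uint8_t* rgbData, int rgbStride)
{
    // In real code the context should be created once and reused per frame.
    SwsContext* sws = sws_getContext(width, height, AV_PIX_FMT_YUV420P,
                                     width, height, AV_PIX_FMT_RGB32,
                                     SWS_BILINEAR, NULL, NULL, NULL);

    uint8_t* dst[] = { rgbData };
    int dstStride[] = { rgbStride };

    // Convert (and, if the sizes differed, scale) the whole frame in one call.
    sws_scale(sws, frame->data, frame->linesize, 0, height, dst, dstStride);

    sws_freeContext(sws);
}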
Thanks.
Recommended Answer
Thanks for the answers, but I finally revisited this problem and came up with a rather simple solution that gives good performance. It involves deriving from QGLWidget and overriding the paintEvent() function. Inside paintEvent(), you can call QPainter::drawImage(...), which performs the scaling to a specified rectangle for you, using hardware if available. So it looks something like this:
#include <QGLWidget>
#include <QPainter>
#include <QPaintEvent>

class QGLCanvas : public QGLWidget
{
public:
    QGLCanvas(QWidget* parent = NULL);
    void setImage(const QImage& image);

protected:
    void paintEvent(QPaintEvent*);

private:
    QImage img;
};

QGLCanvas::QGLCanvas(QWidget* parent)
    : QGLWidget(parent)
{
}

void QGLCanvas::setImage(const QImage& image)
{
    // QImage is implicitly shared, so this is a cheap shallow copy.
    // The caller is responsible for triggering a repaint (e.g. update()).
    img = image;
}

void QGLCanvas::paintEvent(QPaintEvent*)
{
    QPainter p(this);

    // Set the painter to use a smooth scaling algorithm.
    p.setRenderHint(QPainter::SmoothPixmapTransform, true);

    // Draw the image scaled to fill the widget; on a QGLWidget the scaling
    // is done by the graphics hardware when available.
    p.drawImage(this->rect(), img);
}
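For context, here is a minimal driving sketch; decodeNextFrameAsQImage() is a hypothetical stand-in for whatever produces an RGB32 QImage on the decoding side, not part of the original answer:

#include <QApplication>

int main(int argc, char* argv[])
{
    QApplication app(argc, argv);

    QGLCanvas canvas;
    canvas.resize(1280, 720);
    canvas.show();

    // Per frame (normally driven by a timer or signals from the decoder thread):
    QImage frame = decodeNextFrameAsQImage();  // hypothetical helper
    canvas.setImage(frame);
    canvas.update();  // schedules a paintEvent() so the new frame is drawn

    return app.exec();
}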
With this, I still have to convert the YUV 420P to RGB32, but ffmpeg has a very fast implementation of that conversion in libswscale. The major gains come from two things:
- No need for software scaling. Scaling is done on the video card (if available).
- The conversion from QImage to QPixmap, which happens in the QPainter::drawImage() function, is performed at the original image resolution as opposed to the upscaled fullscreen resolution.
I was pegging my processor on just the display (decoding was being done in another thread) with my previous method. Now my display thread only uses about 8-9% of a core for fullscreen 1920x1200 30fps playback. I'm sure it could probably get even better if I could send the YUV data straight to the video card, but this is plenty good enough for now.