Problem Description
I'm trying to do a perspective transformation of a set of points in order to achieve a deskewing effect:
http://nuigroup.com/?ACT=28&fid=27&aid=1892_H6eNAaign4Mrnn30Au8d
I'm using the image below for tests, and the green rectangle displays the area of interest.
I was wondering if it's possible to achieve the effect I'm hoping for using a simple combination of cv::getPerspectiveTransform and cv::warpPerspective. I'm sharing the source code I've written so far, but it doesn't work. This is the resulting image:
So there is a vector<cv::Point> that defines the region of interest, but the points are not stored in any particular order inside the vector, and that's something I can't change in the detection procedure. Anyway, later, the points in the vector are used to define a RotatedRect, which in turn is used to assemble cv::Point2f src_vertices[4];, one of the variables required by cv::getPerspectiveTransform().
My understanding about vertices and how they are organized might be one of the issues. I also think that using a RotatedRect is not the best idea to store the original points of the ROI, since the coordinates will change a little bit to fit into the rotated rectangle, and that's not very cool.
#include <cv.h>
#include <highgui.h>
#include <iostream>
using namespace std;
using namespace cv;
int main(int argc, char* argv[])
{
cv::Mat src = cv::imread(argv[1], 1);
// After some magical procedure, these are the detected points that
// represent the corners of the paper in the picture:
// [408, 69] [72, 2186] [1584, 2426] [1912, 291]
vector<Point> not_a_rect_shape;
not_a_rect_shape.push_back(Point(408, 69));
not_a_rect_shape.push_back(Point(72, 2186));
not_a_rect_shape.push_back(Point(1584, 2426));
not_a_rect_shape.push_back(Point(1912, 291));
// For debugging purposes, draw green lines connecting those points
// and save it on disk
const Point* point = &not_a_rect_shape[0];
int n = (int)not_a_rect_shape.size();
Mat draw = src.clone();
polylines(draw, &point, &n, 1, true, Scalar(0, 255, 0), 3, CV_AA);
imwrite("draw.jpg", draw);
// Assemble a rotated rectangle out of that info
RotatedRect box = minAreaRect(cv::Mat(not_a_rect_shape));
std::cout << "Rotated box set to (" << box.boundingRect().x << "," << box.boundingRect().y << ") " << box.size.width << "x" << box.size.height << std::endl;
// Does the order of the points matter? I assume they do NOT.
// But if it does, is there an easy way to identify and order
// them as topLeft, topRight, bottomRight, bottomLeft?
cv::Point2f src_vertices[4];
src_vertices[0] = not_a_rect_shape[0];
src_vertices[1] = not_a_rect_shape[1];
src_vertices[2] = not_a_rect_shape[2];
src_vertices[3] = not_a_rect_shape[3];
Point2f dst_vertices[4];
dst_vertices[0] = Point(0, 0);
dst_vertices[1] = Point(0, box.boundingRect().width-1);
dst_vertices[2] = Point(0, box.boundingRect().height-1);
dst_vertices[3] = Point(box.boundingRect().width-1, box.boundingRect().height-1);
Mat warpMatrix = getPerspectiveTransform(src_vertices, dst_vertices);
cv::Mat rotated;
warpPerspective(src, rotated, warpMatrix, rotated.size(), INTER_LINEAR, BORDER_CONSTANT);
imwrite("rotated.jpg", rotated);
return 0;
}
Can someone help me fix this problem?
Recommended Answer
So, the first problem is the corner order. They must be in the same order in both vectors. So, if in the first vector your order is (top-left, bottom-left, bottom-right, top-right), they MUST be in the same order in the other vector.
Second, to have the resulting image contain only the object of interest, you must set its width and height to be the same as the resulting rectangle's width and height. Do not worry, the src and dst images in warpPerspective can be different sizes.
Third, a performance concern. While your method is absolutely accurate, because you are only doing affine transforms (rotation, resizing, deskewing), mathematically you can use the affine counterparts of those functions. They are much faster:
getAffineTransform()
warpAffine()
Important note: getAffineTransform() needs and expects ONLY 3 points, and the resulting matrix is 2-by-3 instead of 3-by-3.
How to make the result image have a different size than the input:
Instead of
cv::warpPerspective(src, dst, warpMatrix, dst.size(), ... );
use
cv::Mat rotated;
cv::Size size(box.boundingRect().width, box.boundingRect().height);
cv::warpPerspective(src, rotated, warpMatrix, size, ... );
So here you are, and your programming assignment is over.
int main()
{
cv::Mat src = cv::imread("r8fmh.jpg", 1);
// After some magical procedure, these are the detected points that
// represent the corners of the paper in the picture:
// [408, 69] [72, 2186] [1584, 2426] [1912, 291]
vector<Point> not_a_rect_shape;
not_a_rect_shape.push_back(Point(408, 69));
not_a_rect_shape.push_back(Point(72, 2186));
not_a_rect_shape.push_back(Point(1584, 2426));
not_a_rect_shape.push_back(Point(1912, 291));
// For debugging purposes, draw green lines connecting those points
// and save it on disk
const Point* point = &not_a_rect_shape[0];
int n = (int)not_a_rect_shape.size();
Mat draw = src.clone();
polylines(draw, &point, &n, 1, true, Scalar(0, 255, 0), 3, CV_AA);
imwrite("draw.jpg", draw);
// Assemble a rotated rectangle out of that info
RotatedRect box = minAreaRect(cv::Mat(not_a_rect_shape));
std::cout << "Rotated box set to (" << box.boundingRect().x << "," << box.boundingRect().y << ") " << box.size.width << "x" << box.size.height << std::endl;
Point2f pts[4];
box.points(pts);
// Does the order of the points matter? I assume they do NOT.
// But if it does, is there an easy way to identify and order
// them as topLeft, topRight, bottomRight, bottomLeft?
cv::Point2f src_vertices[3];
src_vertices[0] = pts[0];
src_vertices[1] = pts[1];
src_vertices[2] = pts[3];
//src_vertices[3] = not_a_rect_shape[3];
Point2f dst_vertices[3];
dst_vertices[0] = Point(0, 0);
dst_vertices[1] = Point(box.boundingRect().width-1, 0);
dst_vertices[2] = Point(0, box.boundingRect().height-1);
/* Mat warpMatrix = getPerspectiveTransform(src_vertices, dst_vertices);
cv::Mat rotated;
cv::Size size(box.boundingRect().width, box.boundingRect().height);
warpPerspective(src, rotated, warpMatrix, size, INTER_LINEAR, BORDER_CONSTANT);*/
Mat warpAffineMatrix = getAffineTransform(src_vertices, dst_vertices);
cv::Mat rotated;
cv::Size size(box.boundingRect().width, box.boundingRect().height);
warpAffine(src, rotated, warpAffineMatrix, size, INTER_LINEAR, BORDER_CONSTANT);
imwrite("rotated.jpg", rotated);
return 0;
}