Problem Description
std::vector::size()
returns a size_type
which is unsigned and usually the same as size_t
, e.g. it is 8 bytes on 64-bit platforms.
In contrast, QVector::size()
returns an int
which is usually 4 bytes even on 64-bit platforms, and on top of that it is signed, which means it can only go halfway to 2^32.
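To make the difference concrete, here is a minimal sketch (assuming a typical LP64 64-bit platform and Qt 5, where QVector<int>::size_type is a plain int):

#include <QVector>
#include <vector>
#include <cstdio>

int main()
{
    // On a typical LP64 64-bit platform this prints 8 and 4 respectively.
    std::printf("std::vector<int>::size_type: %zu bytes\n",
                sizeof(std::vector<int>::size_type));
    std::printf("QVector<int>::size_type:     %zu bytes\n",
                sizeof(QVector<int>::size_type));   // plain int in Qt 5
    return 0;
}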
Why is that? This seems quite illogical and also technically limiting, and while it is not very likely that you will ever need more than 2^32 elements, the use of a signed int cuts that range in half for no apparent good reason. Perhaps to avoid compiler warnings for people too lazy to declare i
as a uint
rather than an int
who decided that making all containers return a size type that makes no sense is a better solution? The reason could not possibly be that dumb?
This has been discussed several times since Qt 3 at least, and the QtCore maintainer stated a while ago that no change would happen before Qt 7, if it happens at all.
When the discussion was going on back then, I thought that someone would bring it up on Stack Overflow sooner or later... and probably on several other forums and Q/A, too. Let us try to demystify the situation.
In general you need to understand that there is no better or worse here as QVector
is not a replacement for std::vector
. The latter does not do any Copy-On-Write (COW) and that comes with a price. It is meant for a different use case, basically. It is mostly used inside Qt applications and the framework itself, initially for QWidgets in the early times.
size_t
has its own issues, too, after all, which I will point out below.
Rather than interpreting the maintainer for you, I will just quote Thiago directly to convey the official stance:
For two reasons:
1) it's signed because we need negative values in several places in the API: indexOf() returns -1 to indicate a value not found; many of the "from" parameters can take negative values to indicate counting from the end. So even if we used 64-bit integers, we'd need the signed version of it. That's the POSIX ssize_t or the Qt qintptr.
This also avoids sign-change warnings when you implicitly convert unsigneds to signed:
-1 + size_t_variable => warning
size_t_variable - 1 => no warning
2) it's simply "int" to avoid conversion warnings or ugly code related to the use of integers larger than int.
io/qfilesystemiterator_unix.cpp
size_t maxPathName = ::pathconf(nativePath.constData(), _PC_NAME_MAX);
if (maxPathName == size_t(-1))
io/qfsfileengine.cpp
if (len < 0 || len != qint64(size_t(len))) {
io/qiodevice.cpp
qint64 QIODevice::bytesToWrite() const
{
return qint64(0);
}
return readSoFar ? readSoFar : qint64(-1);
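Stepping out of the quote for a moment, here is a sketch of my own (not code from the email) of what reason 1 looks like in practice, using the Qt 5 APIs QVector::indexOf() and QString::lastIndexOf(); the -1 sentinel and the negative "from" argument are exactly the cases Thiago mentions:

#include <QVector>
#include <QString>
#include <QDebug>

int main()
{
    QVector<int> v{1, 2, 3};

    // A signed return type lets -1 act as the "not found" sentinel.
    int pos = v.indexOf(42);
    if (pos == -1)
        qDebug() << "42 is not in the vector";

    // A negative "from" counts back from the end (here: start the search
    // at the last character).
    QString s = QStringLiteral("abcabc");
    qDebug() << s.lastIndexOf(QStringLiteral("abc"), -1);   // prints 3

    return 0;
}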
That was one email from Thiago, and then there is another where you can find a more detailed answer:
Even today, software that has a core memory of more than 4 GB (or even 2 GB) is an exception, rather than the rule. Please be careful when looking at the memory sizes of some process tools, since they do not represent actual memory usage.
In any case, we're talking here about having one single container addressing more than 2 GB of memory. Because of the implicitly shared & copy-on-write nature of the Qt containers, that will probably be highly inefficient. You need to be very careful when writing such code to avoid triggering COW and thus doubling or worse your memory usage. Also, the Qt containers do not handle OOM situations, so if you're anywhere close to your memory limit, Qt containers are the wrong tool to use.
The largest process I have on my system is qtcreator and it's also the only one that crosses the 4 GB mark in VSZ (4791 MB). You could argue that it is an indication that 64-bit containers are required, but you'd be wrong:
Qt Creator does not have any container requiring 64-bit sizes, it simply needs 64-bit pointers
It is not using 4 GB of memory. That's just VSZ (mapped memory). The total RAM currently accessible to Creator is merely 348.7 MB.
And it is using more than 4 GB of virtual space because it is a 64-bit application. The cause-and-effect relationship is the opposite of what you'd expect. As a proof of this, I checked how much virtual space is consumed by padding: 800 MB. A 32-bit application would never do that, that's 19.5% of the addressable space on 4 GB.
(padding is virtual space allocated but not backed by anything; it's only there so that something else doesn't get mapped to those pages)
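To illustrate the copy-on-write point from the quote with a sketch of my own (Qt 5 containers; the numbers are only indicative): copying a QVector is cheap until one of the copies is modified, at which point the detach makes a full deep copy and momentarily roughly doubles the memory held by that data.

#include <QVector>

int main()
{
    QVector<double> a(100 * 1000 * 1000);   // roughly 800 MB of payload

    QVector<double> b = a;   // cheap: b only references a's data (implicit sharing)

    b[0] = 1.0;              // first non-const access detaches b: a full deep
                             // copy is made and memory usage roughly doubles

    return 0;
}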
To dig even further into this topic with Thiago's responses, see this:
Personally, I'm VERY happy that Qt collection sizes are signed. It seems nuts to me that an integer value potentially used in an expression using subtraction be unsigned (e.g. size_t).
An integer being unsigned doesn't guarantee that an expression involving that integer will never be negative. It only guarantees that the result will be an absolute disaster.
On the other hand, the C and C++ standards define the behaviour of unsigned overflows and underflows.
Signed integers do not overflow or underflow. I mean, they do because the types and CPU registers have a limited number of bits, but the standards say they don't. That means the compiler will always optimise assuming you don't over- or underflow them.
Example:
for (int i = 1; i >= 1; ++i)
This is optimised to an infinite loop because signed integers do not overflow. If you change it to unsigned, then the compiler knows that it might overflow and come back to zero.
Some people didn't like that: http://gcc.gnu.org/bugzilla/show_bug.cgi?id=30475
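To connect the two quoted points, here is a small sketch of my own showing the subtraction pitfall: with an unsigned size the wraparound is well defined but disastrous, while a signed size behaves as written.

#include <vector>
#include <cstdio>

int main()
{
    std::vector<int> v;   // empty on purpose

    // Classic pitfall: v.size() is an unsigned 0, so v.size() - 1 wraps
    // around to SIZE_MAX (well defined, but disastrous), and this loop
    // would try to visit billions of elements:
    //
    //   for (std::size_t i = 0; i < v.size() - 1; ++i) { /* ... */ }
    //
    // With a signed size the same expression is simply -1 and the loop
    // body never runs:
    long long n = static_cast<long long>(v.size());
    for (long long i = 0; i < n - 1; ++i)
        std::printf("%d\n", v[static_cast<std::size_t>(i)]);

    std::printf("n - 1 = %lld\n", n - 1);   // prints -1
    return 0;
}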