When Unpacker is used as an iterator, the internal buffer (_fb_buffer)
used to be compacted by reallocation (done by _fb_consume) after each
yield. When dealing with many small objects, this is very inefficient.
Commit 7eb371f827 therefore changed the pure Python fallback to
reallocate the complete buffer only once iteration stops.
If, halfway through, data turns out to be missing from the buffer, we
roll back the buffer to the state before the failed call and raise
OutOfData. This rollback, done by _fb_rollback, did not take into
account that the buffer may *not* have been reallocated. This commit
corrects that.
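
For context, the iterator scenario this fix targets looks roughly like
the sketch below (assuming the public msgpack-python API: Unpacker(),
feed() and iteration); feeding a truncated stream is what drives the
unpacker into the internal OutOfData path and the rollback.

    import msgpack

    unpacker = msgpack.Unpacker()
    packed = msgpack.packb([1, 2, 3]) + msgpack.packb({"answer": 42})

    # Feed everything except the last few bytes: iteration yields the
    # complete objects and stops once the buffer runs dry (internally
    # the unpacking attempt raises OutOfData and rolls the buffer back).
    unpacker.feed(packed[:-3])
    for obj in unpacker:
        print(obj)            # [1, 2, 3]

    # After feeding the rest, iteration resumes exactly where the
    # rollback left the buffer -- the spot this commit fixes.
    unpacker.feed(packed[-3:])
    for obj in unpacker:
        print(obj)            # the dict (keys may be bytes on older defaults)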
Bas Westerbaan 2015-01-26 20:34:31 +01:00
parent 83404945c0
commit a71a24d86a

@@ -195,6 +195,9 @@ class Unpacker(object):
         # the buffer is not "consumed" completely, for efficiency sake.
         # Instead, it is done sloppily. To make sure we raise BufferFull at
         # the correct moments, we have to keep track of how sloppy we were.
+        # Furthermore, when the buffer is incomplete (that is: in the case
+        # we raise an OutOfData) we need to roll back the buffer to the correct
+        # state, which _fb_sloppiness records.
         self._fb_sloppiness = 0
         self._max_buffer_size = max_buffer_size or 2**31-1
         if read_size > self._max_buffer_size:
@@ -283,7 +286,7 @@
     def _fb_rollback(self):
         self._fb_buf_i = 0
-        self._fb_buf_o = 0
+        self._fb_buf_o = self._fb_sloppiness

     def _fb_get_extradata(self):
         bufs = self._fb_buffers[self._fb_buf_i:]
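
To make the nature of the bug concrete, here is a minimal, hypothetical
model of the bookkeeping (not the real msgpack code; SloppyBuffer and
its method names are made up for illustration): the consumed-but-not-
compacted prefix is exactly what rollback has to restore.

    class SloppyBuffer:
        """Toy model of a buffer that is compacted lazily ("sloppily")."""

        def __init__(self, data):
            self._buf = data        # stands in for the internal buffer
            self._offset = 0        # stands in for _fb_buf_o
            self._sloppiness = 0    # consumed bytes not yet compacted away

        def read(self, n):
            if self._offset + n > len(self._buf):
                self.rollback()
                raise EOFError("out of data")  # stands in for OutOfData
            chunk = self._buf[self._offset:self._offset + n]
            self._offset += n
            return chunk

        def mark_yielded(self):
            # Called after a complete object was unpacked and yielded:
            # instead of reallocating the buffer, just remember how far
            # the consumed prefix reaches.
            self._sloppiness = self._offset

        def rollback(self):
            # The fix: return to the last committed position, not to 0,
            # because the consumed prefix was never actually cut off.
            self._offset = self._sloppiness

    buf = SloppyBuffer(b"abcdef")
    buf.read(3)          # b'abc', one object's worth of bytes
    buf.mark_yielded()   # object yielded; buffer deliberately not compacted
    try:
        buf.read(10)     # not enough data -> rollback, then error
    except EOFError:
        pass
    buf.read(3)          # b'def' with the fix; b'abc' again with the old code

With the old rollback, the offset was reset to zero, so the bytes of the
already-yielded objects would be decoded a second time after more data
arrived; restoring the recorded sloppiness avoids that.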