Compare commits

...

771 commits
0.1.12 ... main

Author SHA1 Message Date
Inada Naoki
f9806368ae
update cython and cibuildwheel (#658) 2025-12-01 14:16:03 +09:00
Inada Naoki
c1ecd23dbf
drop Python 3.9 (#656) 2025-10-09 18:33:23 +09:00
Inada Naoki
af45640970
cython: freethreading_compatible (#654)
```
$ v3/bin/python -VV
Python 3.14.0 free-threading build (main, Oct  7 2025, 15:35:12) [Clang 20.1.4 ]

$ v3/bin/python -c 'import sys,msgpack; print(sys._is_gil_enabled())'
False
```
2025-10-09 15:53:08 +09:00
Inada Naoki
c2546eabc4
update setuptools requirements to >=78.1.1 (#653)
https://github.com/advisories/GHSA-5rjg-fvgr-3xxf
2025-10-09 15:28:01 +09:00
Inada Naoki
ef4f83df16
relax setuptools version (#652) 2025-10-09 13:00:46 +09:00
Inada Naoki
19b5d33ded
release v1.1.2 (#649) 2025-10-08 17:56:20 +09:00
TW
0f3c4be465
README: fix typos and grammar (#648) 2025-10-08 16:10:46 +09:00
MS-GITS
c2a9f1fda5
ci: add support for building windows on arm wheels (#643) 2025-09-26 13:17:17 +09:00
Inada Naoki
d9873dab04
ci: update cibuildwheel and drop Python 3.8 (#642) 2025-07-26 10:59:05 +09:00
Inada Naoki
42f056f3cf v1.1.1 2025-06-13 15:41:08 +09:00
Inada Naoki
e6445d3b92 v1.1.1rc1 2025-06-06 09:56:15 +09:00
Inada Naoki
fe9e620a60
upload to PyPI when creating a release (#639) 2025-06-02 14:46:53 +09:00
Inada Naoki
cdc7644503
update cibuildwheel to v2.23.3 (#638) 2025-06-01 16:56:44 +09:00
Inada Naoki
868aa2cd83
update Cython to 3.1.1 (#637) 2025-05-31 12:45:06 +09:00
Edgar Ramírez Mondragón
0eeabfb453
Add Python 3.13 trove classifier (#626) 2024-10-08 18:04:56 +09:00
Inada Naoki
4587393b1a
release v1.1.0 (#622) 2024-09-10 01:58:00 +09:00
Thomas A Caswell
20a2b8eaa2
use PyLong_* instead of PyInt_* (#620)
9af421163cb8081414be347038dee7a82b29e8dd in Cython removed back-compatibility `#define`.
2024-08-21 14:56:00 +09:00
Inada Naoki
9d0c7f2f9c
Release v1.1.0rc2 (#619) 2024-08-19 20:36:26 +09:00
Inada Naoki
9e26d80ab2
update cibuildwheel to 2.20.0 (#618) 2024-08-19 17:56:01 +09:00
Inada Naoki
6e11368f5d
update Cython to 3.0.11 (#617) 2024-08-19 17:35:16 +09:00
Inada Naoki
0b1c47b06b
do not install cython as build dependency (#610)
Users cannot cythonize during `pip install msgpack`,
so remove Cython from the build dependencies.

If users need to use another Cython, they should download the sdist, unzip it,
cythonize manually, and run `pip install .`.
2024-05-07 22:01:54 +09:00
Inada Naoki
9cea8b6da2
Release v1.1.0rc1 (#609) 2024-05-07 20:49:23 +09:00
Inada Naoki
33e0e86f4e
Cleanup code and pyproject (#608)
* use isort
* fallback: use BytesIO instead of StringIO. We had dropped Python 2
already.
2024-05-06 11:46:31 +09:00
Inada Naoki
e0f0e145f1
better error checks (#607)
* check buffer exports
* add error messages
2024-05-06 03:33:48 +09:00
Inada Naoki
e1068087e0
cython: better exception handling (#606)
- use `except -1` instead of manual error handling
- use `PyUnicode_AsUTF8AndSize()`
- use `_pack()` and `_pack_inner()` instead of `while True:`
2024-05-06 02:13:12 +09:00
Inada Naoki
3da5818a3a
update readme (#605) 2024-05-06 02:12:46 +09:00
Inada Naoki
72e65feb0e
packer: add buf_size option (#604)
And change the default buffer size to 256KiB.

Signed-off-by: Rodrigo Tobar <rtobar@icrar.org>
Co-authored-by: Rodrigo Tobar <rtobar@icrar.org>
2024-05-06 00:49:12 +09:00
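For context: `buf_size` is a Packer constructor option. A minimal usage sketch, assuming the keyword named in the commit title (the 1 MiB value is arbitrary):

```python
import msgpack

# buf_size sets the Packer's initial internal buffer (default 256 KiB after #604);
# a larger buffer avoids reallocations when packing big messages.
packer = msgpack.Packer(buf_size=1024 * 1024)  # keyword assumed from the commit title
payload = packer.pack({"key": "value"})
```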
Inada Naoki
bf2413f915 ignore msgpack/*.c 2024-05-06 00:30:07 +09:00
Inada Naoki
a97b31437d
Remove unused code (#603) 2024-05-06 00:13:59 +09:00
Inada Naoki
52f8bc2e55
implement buffer protocol (#602)
Fix #479
2024-05-05 23:14:27 +09:00
Inada Naoki
526ec9c923
update cibuildwheel to 2.17 (#601) 2024-05-04 16:49:22 +09:00
Inada Naoki
b389ccf2f7
update README (#561) 2024-05-04 16:10:37 +09:00
Inada Naoki
3e9a2a7419
Stop using c++ (#600)
Python 3.13a6+, C++, and Cython cause a compile error on some compilers.
2024-05-04 16:01:48 +09:00
Inada Naoki
0602baf3ea
update Cython and setuptools (#599) 2024-05-03 18:20:09 +09:00
Inada Naoki
2eca765533
use ruff instead of black (#598) 2024-05-03 15:17:54 +09:00
hakan akyürek
e77672200b
Avoid using floating points during timestamp-datetime conversions (#591) 2024-04-20 07:46:30 +09:00
Inada Naoki
9aedf8ed7f
Release v1.0.8 (#583) 2024-03-01 20:35:28 +09:00
Inada Naoki
bf7bf88ad0
ci: update workflows (#582) 2024-03-01 20:09:55 +09:00
Inada Naoki
039022cecb
update Cython (#581) 2024-03-01 19:24:06 +09:00
Inada Naoki
140864249f
exclude C/Cython files from wheel (#577) 2023-12-20 20:46:04 +09:00
Inada Naoki
c78026102c
doc: use sphinx-rtd-theme (#575) 2023-11-15 23:34:32 +09:00
Inada Naoki
2982e9ff72
release v1.0.7 (#569) 2023-09-28 17:31:52 +09:00
Inada Naoki
acd0684392
do not fallback on build error (#568) 2023-09-28 15:25:10 +09:00
Inada Naoki
ecf03748c7
remove inline macro for msvc (#567) 2023-09-28 15:03:16 +09:00
Inada Naoki
b1b0edaeed
release v1.0.6 (#564) 2023-09-21 14:58:37 +09:00
Inada Naoki
e1d3d5d5c3
update actions (#563) 2023-09-15 12:02:06 +09:00
Inada Naoki
4e10c10aaa
prepare for 1.0.6rc1 (#557) 2023-09-13 18:40:04 +09:00
TW
41d6239c0a
fix .readthedocs.yaml, fixes #559 (#560) 2023-09-13 02:51:12 +09:00
TW
ef15f4a62c
add a basic .readthedocs.yaml file (#558) 2023-09-07 21:25:07 +09:00
TW
423c6df265
move project metadata to pyproject.toml (#555)
also: replace flake8 by ruff.
2023-09-05 10:51:04 +09:00
TW
7b75b4f368
sphinx-related work (#554)
fixes #510
2023-08-31 12:56:24 +09:00
Inada Naoki
715126c67b
CI: update cibuildwheel to v2.15.0 (#551) 2023-08-09 18:20:05 +09:00
Inada Naoki
7cfced5150 start v1.0.6 development 2023-08-09 18:09:42 +09:00
Inada Naoki
427736bbcc
try Cython 3.0 (#548) 2023-07-21 11:11:04 +09:00
Inada Naoki
e5249f877c ci: add Python 3.12 and drop 3.7 2023-07-21 02:53:58 +09:00
Evgeny Markov
c8d0751fe3
Drop Python 3.6 support (#543)
The following steps have been taken:

1. Black was updated to latest version. The code has been formatted with
the new version.
2. The pyupgrade utility is installed. This helped to remove all the
code that was needed to support Python < 3.7.

Fix #541.

Co-authored-by: Inada Naoki <songofacandy@gmail.com>
2023-05-24 01:41:08 +09:00
sblondon
feec06206c
Drop python2 support (#519)
The PR removes python2 references and cases.

Close #518

Co-authored-by: Inada Naoki <songofacandy@gmail.com>
2023-05-21 16:26:39 +09:00
Laerte Pereira
45f848695c
fix: build status badge (#538) 2023-04-08 14:18:25 +09:00
Inada Naoki
802cbc9495 Add security policy 2023-04-01 00:02:25 +09:00
Inada Naoki
0516c2c2a9 Action: Update test workflow 2023-03-09 01:22:38 +09:00
Inada Naoki
35b2d246cf Action: Update wheel workflow 2023-03-09 00:47:52 +09:00
Inada Naoki
4c55f809fe
Release v1.0.5 (#534) 2023-03-09 00:43:28 +09:00
Inada Naoki
aa9ce3e2bb Action: Run publish on tag creation. 2023-03-08 22:19:17 +09:00
Anthon van der Neut
dcb775031c
minor typo in exception message (#533)
interger -> integer
2023-03-05 23:45:38 +09:00
Inada Naoki
e3ef909c47 Action: Use setup-python@v4 2023-01-18 13:07:24 +00:00
Inada Naoki
1008229553
Release v1.0.5rc1 (#528) 2023-01-18 19:47:15 +09:00
Inada Naoki
b82d0b62f1
fallback: Fix packing multidim memoryview (#527)
Fix #526
2023-01-18 19:13:44 +09:00
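For context, a minimal sketch of the case this fixes: packing a multidimensional memoryview (the sample bytes and shape are made up for illustration):

```python
import msgpack

# A 2-D, C-contiguous view over six bytes.
mv = memoryview(b"abcdef").cast("B", shape=[2, 3])

packed = msgpack.packb(mv)                    # packed as a msgpack bin of the underlying bytes
assert msgpack.unpackb(packed) == b"abcdef"   # round-trips back to the raw bytes
```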
Inada Naoki
c3995669f1 Remove unused code 2023-01-18 08:08:58 +00:00
Matthieu Darbois
44a8060383
Add python 3.11 wheels (#517) 2022-09-09 16:16:12 +09:00
Inada Naoki
edca770071
Fix build error caused by ntohs, ntohl (#514) 2022-08-08 15:08:40 +09:00
Jakub Kulík
9d45926a59
Use __BYTE_ORDER__ instead of __BYTE_ORDER (#513)
__BYTE_ORDER__ is a common predefined macro available on at least gcc and clang.
__BYTE_ORDER is a macro defined in platform-specific headers.
2022-08-02 13:19:56 +09:00
Inada Naoki
b5acfd5383
Release v1.0.4 (#509) 2022-06-03 13:46:51 +09:00
Inada Naoki
caadbf2df5 Use Actions to publish to PyPI 2022-05-25 12:10:47 +09:00
Inada Naoki
a34dc945bf 1.0.4rc1 2022-05-25 10:00:57 +09:00
Inada Naoki
63837a44d8
ci: Update action versions. (#507) 2022-05-24 20:13:07 +09:00
Inada Naoki
500a238028
Fix Unpacker max_buffer_length handling (#506) 2022-05-24 19:46:51 +09:00
Inada Naoki
b75e3412fb Fix pip upgrade 2022-05-23 05:01:08 +00:00
Inada Naoki
b901b179d1 Update Cython to 0.29.30 2022-05-23 04:52:09 +00:00
Shantanu
6a721faa77
Upgrade black (#505) 2022-05-02 17:26:53 +09:00
Victor Stinner
849c806381
Use PyFloat_Pack8() on Python 3.11a7 (#499)
Python 3.11a7 adds public functions:

* PyFloat_Pack4(), PyFloat_Pack8()
* PyFloat_Unpack4(), PyFloat_Unpack8()

https://bugs.python.org/issue46906
2022-03-14 11:23:11 +09:00
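PyFloat_Pack8() writes an IEEE 754 binary64 in the requested byte order, which is what msgpack's float 64 payload carries. A pure-Python equivalent of the big-endian case, for illustration only:

```python
import struct

# Big-endian IEEE 754 binary64 -- the byte layout of a msgpack float 64 payload.
assert struct.pack(">d", 1.5) == b"\x3f\xf8\x00\x00\x00\x00\x00\x00"
```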
Inada Naoki
cb50b2081b
Update setuptools and black (#498)
* Use setuptools
* Use black==22.1.0
2022-03-03 12:29:55 +09:00
Inada Naoki
89ea57747e
Don't define __*_ENDIAN__ macro on Unix. (#495) 2022-01-19 14:42:28 +09:00
Inada Naoki
bdf0511e29
Refactor CI (#492)
* Use cibuildwheel to build wheels.
* Use matrix
2021-11-25 14:43:55 +09:00
Inada Naoki
6129789e9f
Release v1.0.3 (#491) 2021-11-24 16:18:17 +09:00
Inada Naoki
e29b423de7 black 2021-11-17 11:03:06 +09:00
Inada Naoki
724e6200fd 1.0.3rc1 2021-11-16 17:52:01 +09:00
Benjamin Egelund-Müller
e464cb44fa
Nicer error when packing a datetime without tzinfo (#466) 2021-11-16 17:49:47 +09:00
Inada Naoki
cfa05d3fdc
Actions: Run CI only for PRs from forks. (#489) 2021-11-16 17:47:16 +09:00
Inada Naoki
8e358617e7
mac: Provide Universal2 wheel (#488)
* mac: Use cibuildwheel
* Do not build wheel for PyPy.
2021-11-16 17:42:42 +09:00
Inada Naoki
b3f7254192
Support Python 3.10 and Drop Python 3.5 (#487)
* linux: Use manylinux2014
* mac: Drop Python 3.6 too
2021-11-16 16:19:47 +09:00
Inada Naoki
9b84e490e7 Fix black formatting 2021-11-16 14:53:08 +09:00
Paul Melis
09187421eb
Improve exception message relating to strict_map_key (#485) 2021-11-16 14:47:40 +09:00
Vladimir Matveev
38dba9634e
cimport uint64_t instead of using ctypedef (#473) 2021-03-19 06:35:54 +09:00
Andrey Bienkowski
010de11bed
Make pure-python wheels and eggs possible (#467) 2021-02-27 10:50:24 +09:00
Alexander Shadchin
44fd577705
Remove unused PyObject_AsReadBuffer definition (#468)
Also "old" buffer API was removed in Python 3.10
2021-02-27 09:30:46 +09:00
Andrey Bienkowski
4ace82f108
Fix tox.ini (#465)
There is no such thing as [variants] in the tox syntax. This resulted
in MSGPACK_PUREPYTHON being unset in the "pure" test environments
2021-02-26 21:08:06 +09:00
Andrey Bienkowski
38357b928a
Fix error formatting (#463) 2021-02-26 11:39:36 +09:00
Andrey Bienkowski
4b0819dca9
Remove redundant code (#460) 2021-02-16 22:38:06 +09:00
Inada Naoki
1e728a2e0b
fix docstring (#459) 2021-02-12 16:20:14 +09:00
laike9m
cfae52437b
Updated readme about Python 2 support (#456) 2021-01-28 08:33:14 +09:00
Alex Willmer
02e1f7623c
build: Create tox environments using a known Cython version (#408)
This change causes Tox to run the project's setup.py in a virtualenv
(default path is .tox/.package). The required version of Cython is
installed, rather than whatever version is installed system wide.
2021-01-27 10:11:32 +09:00
Guy Tuval
3b71818bb0
Refactor fallback read header (#441) 2021-01-02 15:39:37 +09:00
Inada Naoki
431ef45c8e Use manylinux1 instead of manylinux2010 2020-12-18 17:43:37 +09:00
Inada Naoki
c0516c603f v1.0.2 2020-12-18 16:43:04 +09:00
Inada Naoki
f34fca7fb5 Update readme 2020-12-18 16:21:41 +09:00
Inada Naoki
051f9ded1f format markdown 2020-12-18 16:13:35 +09:00
Inada Naoki
94336cf914
Fix some travis builds. (#453) 2020-12-18 16:03:05 +09:00
Inada Naoki
753b3706d8
Fix overflow in unpacking timestamp to datetime (#452) 2020-12-18 14:21:27 +09:00
Inada Naoki
8029f95516 Add Python 3.9 wheels 2020-12-11 19:33:20 +09:00
Inada Naoki
edd5603661 Actions: Add Python 3.9 2020-12-11 19:31:24 +09:00
Inada Naoki
d893697eab v1.0.1 2020-12-11 19:16:14 +09:00
Tsahi Zidenberg
7d6b4dfb51
Build arm64 wheels (#439) 2020-12-11 14:30:49 +09:00
Inada Naoki
2df517999b Travis: Reduce build
Save credits.
2020-12-11 13:39:24 +09:00
Inada Naoki
44bc2bd439 Update docstring 2020-12-04 17:52:24 +09:00
Peter Fischer
8fb709f2e0
Fix datetime before epoch on windows in cython implementation (#436)
The Cython implementation still used the datetime.fromtimestamp() method, which does not work on Windows for pre-epoch values.
Update the Cython implementation to use the UTC epoch plus a delta, and add a regression test to highlight the issue.
2020-07-30 23:48:51 +09:00
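A sketch of the approach described above (UTC epoch plus a delta instead of `fromtimestamp()`), written as plain Python rather than the actual Cython code:

```python
import datetime

_EPOCH = datetime.datetime(1970, 1, 1, tzinfo=datetime.timezone.utc)

def timestamp_to_datetime(seconds: float) -> datetime.datetime:
    # Works for negative (pre-epoch) values too, unlike fromtimestamp() on Windows.
    return _EPOCH + datetime.timedelta(seconds=seconds)

assert timestamp_to_datetime(-1) == datetime.datetime(
    1969, 12, 31, 23, 59, 59, tzinfo=datetime.timezone.utc
)
```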
Peter Fischer
772c830841
Synchronize handling of datetime in Packer implementations (#434)
The handling of datetime differs between the Cython and pure-Python implementations. Contrary to the docs, a timezone is not required in the Python implementation.
2020-07-24 16:29:15 +09:00
Tom Pohl
5614dd5a89
Allow for timestamps before UNIX epoch (#433) 2020-07-23 17:53:55 +09:00
Markus Gerstel
d9ead81021
Fix a typo in the changelog (#429) 2020-06-26 18:15:46 +09:00
Contextualist
3508ca524e
Fix benchmark extension module import (#428) 2020-06-22 11:27:52 +09:00
jfolz
c1b1a23f62
Fix Unpacker.tell() (#427)
Fixes #426.

Co-authored-by: folz <joachim.folz@dfki.de>
2020-06-08 12:14:50 +09:00
Inada Naoki
b04690012d
Update doc version
Fixes #425
2020-05-24 02:15:04 +09:00
Kevin Tewouda
4e10222b51
Fix an example in README.md (#423) 2020-05-13 13:41:15 +09:00
Dan Salmon
692e0ee8ff
Fix typo (#416) 2020-03-18 09:29:51 +09:00
Charles-Axel Dein
2bfc2d0566
Upgrade msgpack if already installed (#414) 2020-02-24 17:51:56 +09:00
Inada Naoki
2849f5582a
Build linux and macOS wheels on GitHub Actions. (#409) 2020-02-19 00:53:00 +09:00
Inada Naoki
12506d8d91 update README 2020-02-17 17:12:47 +09:00
Inada Naoki
fa7d7447fc 1.0.0 2020-02-17 17:07:18 +09:00
Inada Naoki
64f59884a1 Add note 2020-02-17 16:58:25 +09:00
Alex Willmer
fcb19a0e1a
Clean msgpack/_cmsgpack.cpp and msgpack/_cmsgpack.*.so (#407) 2020-02-14 15:51:19 +09:00
Alex Willmer
cd6561db52
build: Don't test C extension on CPython 2.7 under Tox (#406)
As the Changelog notes, release 1.0 will drop support for the native
extension on CPython 2.x, so there seems to be little benefit in testing it.
2020-02-14 15:45:17 +09:00
Erik Cederstrand
f0952f1dd6
travis: Python 3.9 is the new dev version. (#405) 2020-02-14 12:31:09 +09:00
Inada Naoki
9d79351e99
Add some test for timestamp (#403) 2020-02-06 22:11:04 +09:00
Inada Naoki
ff1f5f89d9 README: ` -> 2020-02-06 21:06:04 +09:00
Inada Naoki
0dad821169 Fix markdown 2020-02-06 20:35:41 +09:00
Inada Naoki
24950990f4 Remove broken example 2020-02-06 20:29:33 +09:00
Emilio Tagua
1bd6fc36d0
Update README.md (#402) 2020-02-05 21:20:17 +09:00
ossdev07
030bb2f1f7 travis: Add test for arm64 (#399)
Signed-off-by: ossdev07 <ossdev@puresoftware.com>
2019-12-31 10:12:21 +09:00
Inada Naoki
ebfe55e637 travis: Use build config validation. 2019-12-16 15:14:34 +09:00
Inada Naoki
42f5ecfd51 Fix some typo 2019-12-13 15:10:32 +09:00
Inada Naoki
5e1fe818e3 Reintroduce __ne__ 2019-12-12 20:05:25 +09:00
Inada Naoki
9e5ec95e02
Make Timestamp hashable (#396)
When overriding __eq__, __hash__ should be overridden too.
2019-12-12 19:59:06 +09:00
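The rule stated in the commit message, as a standalone sketch (a toy class, not the actual msgpack.Timestamp source):

```python
class Timestamp:
    """Toy value type showing why __hash__ must accompany __eq__."""

    def __init__(self, seconds, nanoseconds=0):
        self.seconds = seconds
        self.nanoseconds = nanoseconds

    def __eq__(self, other):
        return isinstance(other, Timestamp) and (
            (self.seconds, self.nanoseconds) == (other.seconds, other.nanoseconds)
        )

    # Defining __eq__ alone sets __hash__ to None, making instances unhashable.
    # Equal objects must hash equal, so hash the same fields __eq__ compares.
    def __hash__(self):
        return hash((self.seconds, self.nanoseconds))
```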
Inada Naoki
887d3a7d22
Refine Timestamp APIs (#395) 2019-12-12 19:43:59 +09:00
Inada Naoki
aab29ff277 Remove TRANSITIONAL package support 2019-12-12 18:48:16 +09:00
Inada Naoki
a05fc5e7c5 black 2019-12-12 18:46:55 +09:00
Inada Naoki
3df431cafd Prepare 1.0rc1 2019-12-12 18:25:38 +09:00
Inada Naoki
c60e6c7a6f Update README 2019-12-12 18:09:07 +09:00
Inada Naoki
2186455d15
Support datetime. (#394) 2019-12-11 23:48:16 +09:00
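A brief usage sketch, assuming the `datetime=` Packer option and `timestamp=3` Unpacker option of the 1.x API:

```python
import datetime
import msgpack

dt = datetime.datetime(2019, 12, 11, tzinfo=datetime.timezone.utc)

packed = msgpack.packb(dt, datetime=True)           # aware datetime -> msgpack timestamp ext type
assert msgpack.unpackb(packed, timestamp=3) == dt   # timestamp=3: decode timestamps back to datetime
```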
Marty B
5fd6119093 Simplify check for bool type (#362) 2019-12-09 19:29:47 +09:00
Inada Naoki
d10f12db8f typo 2019-12-09 18:12:51 +09:00
Inada Naoki
c356035a57
Unpacker: Change max_buffer_size to 100MiB (#391) 2019-12-09 17:03:12 +09:00
Inada Naoki
5399f8180d
Update README (#393) 2019-12-09 17:02:35 +09:00
Inada Naoki
d8e3cf0563
Make strict_map_key default to True (#392) 2019-12-06 22:23:15 +09:00
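What the new default means in practice; a sketch with made-up data (the exact exception type is assumed to be ValueError):

```python
import msgpack

packed = msgpack.packb({1: "one"})

try:
    msgpack.unpackb(packed)          # strict_map_key=True is now the default
except ValueError:
    pass                             # non-str/bytes map keys are rejected

assert msgpack.unpackb(packed, strict_map_key=False) == {1: "one"}
```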
Inada Naoki
0fc0eb2f16 Update README 2019-12-06 21:26:28 +09:00
Inada Naoki
5ba496c79a
Move Black from Travis to Github Actions (#390) 2019-12-06 21:23:54 +09:00
Inada Naoki
f6f6f328eb
Fix fallback Unpacker.read() (#388)
Fixes #352.
2019-12-06 21:16:27 +09:00
Inada Naoki
7a8ce0f9ca Remove unused import 2019-12-06 20:34:18 +09:00
Inada Naoki
235c6036ea
travis: Use codecov (#387) 2019-12-06 19:28:23 +09:00
Inada Naoki
7e9905bdfa
Use new msgpack spec by default. (#386) 2019-12-05 21:34:10 +09:00
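With the new spec as the default, str and bin stay distinct in both directions. A quick round-trip sketch with sample data:

```python
import msgpack

data = {"text": "abc", "blob": b"\x00\x01"}

packed = msgpack.packb(data)            # use_bin_type=True is now the default
assert msgpack.unpackb(packed) == data  # raw=False is now the default, so str comes back as str
```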
Inada Naoki
de320488ae
fallback: Remove old buffer protocol support (#384) 2019-12-05 20:47:20 +09:00
Inada Naoki
9f4b2d53b7
Remove deprecated submodule unpack (#385) 2019-12-05 20:47:01 +09:00
Inada Naoki
9ae43709e4
Drop old buffer protocol support (#383) 2019-12-05 20:20:53 +09:00
Inada Naoki
af4eea430e travis: Add Black 2019-12-05 19:05:00 +09:00
Inada Naoki
bc8c86203a blacken all files. 2019-12-05 18:53:49 +09:00
Inada Naoki
10e5e39ff9 blacken test 2019-12-05 18:51:45 +09:00
Inada Naoki
e557e17cbd blacken 2019-12-05 18:50:13 +09:00
Inada Naoki
641406902e
Add Timestamp support (#382) 2019-12-05 18:29:15 +09:00
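A short usage sketch of the new Timestamp type (the values are arbitrary):

```python
import msgpack

ts = msgpack.Timestamp(seconds=1575532155, nanoseconds=500)

packed = msgpack.packb(ts)            # serialized as the msgpack timestamp ext type (-1)
assert msgpack.unpackb(packed) == ts  # unpacked back to a Timestamp by default
```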
Inada Naoki
2c6668941f
Intern map keys (#381)
Fixes #372.
2019-12-03 21:18:17 +09:00
Inada Naoki
e419cd8e2d
Remove encoding option from Unpacker. (#380) 2019-12-03 21:13:05 +09:00
Inada Naoki
83ebb63c44
Ressurect unicode_errors of the Packer. (#379) 2019-12-03 20:53:11 +09:00
Inada Naoki
a0480c7602 Update ChangeLog 2019-12-03 18:54:18 +09:00
Inada Naoki
e1ed0044bf
Remove encoding/unicode_errors options from Packer (#378) 2019-12-03 18:54:01 +09:00
Inada Naoki
cc3a8665d6
Use Github Actions for Windows (#377) 2019-12-03 17:46:28 +09:00
Inada Naoki
891f2d8743
Drop Python 2 support from _cmsgpack (#376) 2019-11-28 20:23:34 +09:00
Terence Honles
b458e9a6a2 update for Python 3.8 (#374) 2019-11-23 12:58:55 +09:00
Inada Naoki
997b524f06 0.6.2 2019-09-20 16:37:08 +09:00
Inada Naoki
144f276e88 Update ChangeLog 2019-09-20 16:36:37 +09:00
Inada Naoki
fd3f004863
Add Python 3.8 to travis (#371) 2019-09-19 20:37:19 +09:00
Inada Naoki
c25e2a0984
update Cython to 0.29.13 (#370) 2019-09-19 20:14:33 +09:00
Inada Naoki
3146ebd330
Use Py_SIZE() when it is safe (#369) 2019-09-19 13:20:57 +09:00
Marty B
b98b8cab99 Avoid calling __Pyx_GetModuleGlobalName for ExtType (#363) 2019-09-19 01:15:09 +09:00
Felix Schwarz
05ff11dbcc use relative imports (#357)
Some applications use msgpack to store persistent data and require a specific
msgpack version (e.g. borgbackup). Bundling helps in case there is an
(incompatible) version of msgpack in a system-wide install.
2019-05-12 21:44:32 +09:00
Hugues
737f08a885 Update requirements.txt (#346)
bytearray.pxd is only available starting with Cython 0.29
2019-03-27 22:37:26 +09:00
INADA Naoki
381c2eff5f Fix changelog.
Fixes #343
2019-02-04 12:08:07 +09:00
Inada Naoki
8f513af999 v0.6.1 2019-01-25 21:43:28 +09:00
Inada Naoki
280308e8ce Recommend max_buffer_len instead of max_(str|bin|ext)_len 2019-01-25 21:27:46 +09:00
Inada Naoki
9951b89455 travis: Install new pytest 2019-01-25 21:04:14 +09:00
Inada Naoki
464fe277e1 Remove pytest warnings 2019-01-25 20:52:57 +09:00
Inada Naoki
28b5f46a34
Auto limit configuration (#342) 2019-01-24 18:46:39 +09:00
INADA Naoki
f46523b1af
use _PyFloat APIs to (de)serialize (#340) 2019-01-07 21:10:40 +09:00
Inada Naoki
197e30723a Fix docstring 2018-12-04 20:10:21 +09:00
Inada Naoki
b8bf3c950c Build linux wheel for Python 3.7 2018-12-04 17:18:34 +09:00
Inada Naoki
b1d658e7a0 AppVeyor: Add Python 3.7 and remove 3.6 2018-11-30 19:25:14 +09:00
Inada Naoki
cc7fd5722b 0.6.0 2018-11-30 19:03:44 +09:00
Inada Naoki
bbdfd4d92e cleanup 2018-11-30 16:28:41 +09:00
Inada Naoki
93b5953eae Update tox.ini 2018-11-30 16:05:31 +09:00
Inada Naoki
04cf8fc7f4 Update ChangeLog 2018-11-30 14:04:18 +09:00
INADA Naoki
760e30b77e
unpacker: Add strict_map_key option (#334) 2018-11-30 11:47:15 +09:00
Inada Naoki
8ae6320072 Fix fallback 2018-11-30 11:42:51 +09:00
Inada Naoki
ab789813b8 Fix test 2018-11-30 11:36:15 +09:00
Inada Naoki
e76091a82c Fix test 2018-11-29 22:38:22 +09:00
Inada Naoki
dc1b993079 Implement strict_map_key to fallback unpacker. 2018-11-29 22:35:12 +09:00
Inada Naoki
e9086a34e4 Add strict_map_key option to unpacker 2018-11-29 22:29:38 +09:00
INADA Naoki
3c9c6edbc8 Update README 2018-11-20 15:48:44 +09:00
jkorvin
ab2415eaa0 Unpacker: allow to use buffer with size greater than 2 GB (#332) 2018-11-20 15:24:35 +09:00
INADA Naoki
44254dd35e
Add StackError and FormatError (#331) 2018-11-20 13:12:49 +09:00
INADA Naoki
8b6ce53cce
s/iteritems/items/g (#330) 2018-11-14 21:06:16 +09:00
INADA Naoki
2f808b6e01
Try language_level=3 (#329) 2018-11-14 20:04:22 +09:00
INADA Naoki
d782464c91
Refactor Cython code (#328)
_msgpack -> _cmsgpack
2018-11-14 16:35:37 +09:00
INADA Naoki
2b5f59166b
fallback: Fix warning stacklevel (#327) 2018-11-14 16:34:51 +09:00
INADA Naoki
39f8aa78c7
Remove deprecated write_bytes option (#322) 2018-11-12 02:33:31 +09:00
INADA Naoki
07f0beeabb
Remove deprecated exception classes (#323) 2018-11-12 02:19:01 +09:00
INADA Naoki
1bf62ba6f8
PendingDeprecationWarning -> DeprecationWarning (#321) 2018-11-09 21:39:25 +09:00
INADA Naoki
9e210bfc1a
Add Packer.buffer() (#320) 2018-11-09 20:55:13 +09:00
Inada Naoki
a8b3e97fe5 Update changelog 2018-11-08 22:25:05 +09:00
INADA Naoki
3b80233592
unpacker: Make default size limit smaller (#319)
To avoid DoS attacks, make the default size limit smaller.

Fixes #295
2018-11-08 22:21:44 +09:00
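The limits are per-type Unpacker options; a sketch of setting them explicitly instead of relying on the smaller defaults (the particular numbers are arbitrary):

```python
import msgpack

unpacker = msgpack.Unpacker(
    max_str_len=1024,    # longest str accepted
    max_bin_len=1024,    # longest bin accepted
    max_array_len=128,   # most elements per array
    max_map_len=128,     # most key/value pairs per map
)
```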
Inada Naoki
ae90b26c30 Update ChangeLog 2018-11-08 22:21:05 +09:00
INADA Naoki
08e65bdd03
Merge extension modules (#314)
There were `_packer.so` and `_unpacker.so`,
but a single module is simpler than two.

Merge the extension modules into a single `_msgpack.so`.
2018-11-08 21:39:18 +09:00
INADA Naoki
9d11249d89 Update docker/runtests 2018-11-08 20:31:07 +09:00
INADA Naoki
6c8e539eec Update travis config 2018-11-08 20:31:06 +09:00
INADA Naoki
f6f9597249 Merge extension module
There were `_packer.so` and `_unpacker.so`,
but a single module is simpler than two.

Merge the extension modules into a single `_msgpack.so`.
2018-11-08 20:27:35 +09:00
Inada Naoki
91ec9e1daf Update travis.yml 2018-11-07 23:04:45 +09:00
Marat Sharafutdinov
b077a21f89 Fix stream unpacking example in README (#317) 2018-11-05 01:14:11 +09:00
INADA Naoki
205f7d39b2
Start 0.6 development 2018-10-03 21:06:20 +09:00
Raymond E Ferguson
70b5f21b34 Alternate fixes for jython and legacy CPython (#310)
Python 3.4 is not officially supported,
but keep running the tests for a while to learn when msgpack-python
actually stops working on Python 3.4.

The current patches did not work under Jython 2.7.1, where implicit
casting of buffer or memoryview doesn't work. It may also be that
Jython is a little pickier about casting non-string bytes to strings,
due to the underlying strong typing of Java.

See issues #303 & #304.
2018-10-02 20:20:06 +09:00
INADA Naoki
d1060de293
travis: Run test on Python 3.4 (#307)
Python 3.4 is not officially supported,
but keep running the tests for a while to learn when msgpack-python
actually stops working on Python 3.4.
2018-07-13 19:54:44 +09:00
INADA Naoki
aa41e2fef7
fallback: Fix error on Jython (#304)
Jython doesn't support memoryview += bytes

Fixes #303
2018-07-06 12:40:33 +09:00
Inada Naoki
5f684aed82 fallback: Fix error on Jython
Fixes #303
2018-06-27 01:27:31 +09:00
Alex Gaynor
b10cf78f54 Fix TypeError in fallback.unpack() on <Python 2.7.6 2018-04-16 12:18:35 +09:00
INADA Naoki
984116bd18 Update setup() 2018-04-13 23:41:01 +09:00
INADA Naoki
d4675bee6c 0.5.6 2018-02-23 15:45:34 +09:00
INADA Naoki
ae3a6ba0b0
Deprecate implementation module's unpack() (#290) 2018-02-23 15:41:21 +09:00
INADA Naoki
f38c1a3674
Fix Unpacker.feed() drops unused data in buffer. (#289)
Fixes #287
2018-02-23 11:52:48 +09:00
INADA Naoki
fbaa1360be Fix #285 again 2018-02-23 11:35:09 +09:00
INADA Naoki
3ca8eff31d
Revert "Move unpack() from each implementation to __init__." (#288)
This reverts commit da902f9c1d.
2018-02-23 11:33:26 +09:00
INADA Naoki
9455fccc52 Revert "Move unpack() from each implementation to __init__. (#286)"
This reverts commit da902f9c1d.
2018-02-23 11:32:26 +09:00
INADA Naoki
9bf38105f7 Revert "0.5.5"
This reverts commit 02c881c7cb.
2018-02-23 11:32:26 +09:00
INADA Naoki
02c881c7cb 0.5.5 2018-02-22 17:56:57 +09:00
INADA Naoki
da902f9c1d
Move unpack() from each implementation to __init__. (#286)
Fixes #285
2018-02-22 00:55:32 +09:00
INADA Naoki
ae8d469482
Fix memory leak in pure Python Unpacker.feed() (#284)
fixes #283
2018-02-16 16:35:22 +09:00
INADA Naoki
4b72b61773 Add Makefile target for updating docker image 2018-02-05 15:08:19 +09:00
INADA Naoki
2644cbdcb7
Use cython's cast for converting encoding and errors (#279)
It is a little faster on Python 3 because we can skip a temporary bytes object
2018-02-05 11:44:17 +09:00
INADA Naoki
351023946f 0.5.4 2018-02-05 02:25:12 +09:00
INADA Naoki
9fdb83719d
Undeprecate unicode_errors (#278) 2018-02-05 02:19:48 +09:00
INADA Naoki
618b2cb027 0.5.3 2018-02-03 10:54:21 +09:00
Andrew Rabert
a0ba076c35 Fix encoding and unicode_errors (#277)
Previously, unicode_errors was either set to NULL or to
the result of PyBytes_AsString. This restores that behavior while also
keeping the existing NULL default behavior.
Original defaults were restored to keep API compatibility until these
deprecated options are finally removed.
2018-02-03 10:34:42 +09:00
INADA Naoki
52fb85a2c5 0.5.2 2018-02-02 19:43:42 +09:00
INADA Naoki
5569a4efcd
s/raw_as_bytes/raw/g (#276)
fixes #273
2018-01-12 19:22:36 +09:00
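After the rename the option is simply `raw`; a sketch of the difference it makes when reading old-spec data (the sample string is made up):

```python
import msgpack

packed = msgpack.packb("abc", use_bin_type=False)     # old-spec raw format

assert msgpack.unpackb(packed, raw=True) == b"abc"    # keep raw bytes as-is
assert msgpack.unpackb(packed, raw=False) == "abc"    # decode raw as a UTF-8 str
```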
INADA Naoki
d9ec8fc905
Packer.pack() reset buffer on exception (#274)
fixes #210
2018-01-11 23:50:41 +09:00
INADA Naoki
60ef3879d7
packer: Use PyUnicode_AsUTF8AndSize() for utf-8 (#272) 2018-01-11 19:41:05 +09:00
INADA Naoki
5534d0c7af
Add raw_as_bytes option to Unpacker. (#265) 2018-01-11 17:02:41 +09:00
INADA Naoki
50ea49c86f Update doc 2018-01-10 03:04:54 +09:00
INADA Naoki
fc09da997c fallback: Update docstring. 2018-01-10 02:58:55 +09:00
INADA Naoki
0112957bcf
Remove FutureWarning about use_bin_type option (#271) 2018-01-10 02:54:59 +09:00
INADA Naoki
e0f2fd3af3 Fix README 2018-01-10 02:49:50 +09:00
INADA Naoki
5be9378640 Make msgpack-python deprecated clone of msgpack. 2018-01-10 02:48:08 +09:00
INADA Naoki
ab66c272b0 Update README 2018-01-09 22:03:06 +09:00
INADA Naoki
e0934355c6 Update Makefile 2018-01-09 20:48:45 +09:00
INADA Naoki
676bbcd0ee manylinux1: Add 3.6 and remove 3.4 2018-01-09 19:00:42 +09:00
INADA Naoki
45c1a53d5a
Update AppVeyor build (#267) 2018-01-09 17:58:32 +09:00
INADA Naoki
7c22d983f4 Update README 2018-01-09 13:17:47 +09:00
INADA Naoki
dbb827815a Update Cython version 2018-01-07 02:04:49 +09:00
INADA Naoki
35fc297970 Update README 2018-01-07 02:01:20 +09:00
INADA Naoki
9f4c12f29c Add transition package 2018-01-07 01:59:14 +09:00
INADA Naoki
d720c42468 prepare 0.5 2018-01-07 01:58:01 +09:00
INADA Naoki
89e4f8b7b3 Rename package name to msgpack 2018-01-07 01:57:47 +09:00
INADA Naoki
d0d3a40389
Warn about future use_bin_type change (#264) 2018-01-06 02:07:39 +09:00
INADA Naoki
1979722ba2
Raise MemoryError when failed to grow buffer (#263) 2018-01-05 20:58:14 +09:00
INADA Naoki
43137d6bd2
Deprecate write_bytes option in Unpacker. (#262)
Fixes #197
2018-01-05 20:19:04 +09:00
INADA Naoki
0e2021d3a3 Update changelog 2018-01-05 19:16:14 +09:00
aaron jheng
2eb6e75db1 add license info to metadata (#260) 2017-12-31 11:52:50 +09:00
INADA Naoki
99341035f2
fix zero length raw can't be decoded. (#236)
fix #234
2017-12-21 20:46:14 +09:00
Hugo
3a098851be Remove code and tests for unsupported Python 3.3 and 3.4 (#249) 2017-11-02 18:06:15 +09:00
Martin Braun
1985eb7618 Clarify README, fix grammar, update section on byte arrays (#253) 2017-10-17 12:30:55 +09:00
Hugo
0fc4ee98be Remove code and tests for unsupported Python 2.6 (#250) 2017-10-12 16:27:39 +09:00
INADA Naoki
a70ce0c3d7 Fix travis fail (#251) 2017-10-12 16:26:58 +09:00
INADA Naoki
3d7ebc47b4 travis: Remove "only: master" restriction 2017-10-12 15:29:22 +09:00
INADA Naoki
6fd1890be4 Add py36 to tox.ini 2017-10-12 15:29:22 +09:00
Hugo
54aa47b2dd Update supported versions in classifiers (#248) 2017-10-12 09:26:34 +09:00
Hugo
b57106c246 Update badges (#247) 2017-10-12 02:49:02 +09:00
Lorenzo Bolla
deeda31a88 Add unittests to document serialisation of tuples (#246)
Also, fix formatting of error message in case of tuple.
See https://github.com/msgpack/msgpack-python/issues/245
2017-09-30 16:23:55 +09:00
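What those tests document, in miniature: tuples are packed as msgpack arrays, so the container type changes on round-trip unless `use_list` is turned off.

```python
import msgpack

packed = msgpack.packb((1, 2, 3))                            # tuples pack as msgpack arrays

assert msgpack.unpackb(packed) == [1, 2, 3]                  # arrays unpack to lists by default
assert msgpack.unpackb(packed, use_list=False) == (1, 2, 3)  # ...or to tuples with use_list=False
```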
jfolz
f0f2c0b397 Packer accepts bytearray objects (#229) 2017-05-18 20:03:15 +09:00
jfolz
a8d9162ca6 Unpacker: add tell() (#227) 2017-04-30 02:33:20 +09:00
INADA Naoki
3388e4a6ee travis and appveyor update (#217)
travis:

* stop using tox
* Add Python 3.6 and 3.7-dev
* Stop pypy3 (until PyPy3.5 is released)

appveyor:

* Drop Python 3.4 and add 3.6
2017-01-13 21:46:31 +09:00
INADA Naoki
b328f3ecff Add badge for Read the Docs 2017-01-13 20:48:48 +09:00
INADA Naoki
12845692b5 Add requirements.txt for Read the Docs 2017-01-13 20:41:33 +09:00
INADA Naoki
f985ee8a66 Remove version and date from README 2017-01-13 19:57:04 +09:00
INADA Naoki
2481c64cf1 Merge branch 'release-0.4' 2017-01-12 18:17:00 +09:00
TW
e3fea94509 fix typos and other cosmetic issues (#214)
cosmetic issues:
- reST headlines' underline length needs to match the headline length
  (looks like somebody is / was using a proportional font)
- Cython code lines do not need to be terminated with a semicolon
- always use triple-double-quotes for docstrings
2017-01-11 12:04:23 +09:00
INADA Naoki
1cc3c574a2 Merge branch 'release-0.4' 2016-07-30 11:38:00 +09:00
INADA Naoki
a9f4dad4dc Make manylinux1 wheels 2016-07-30 11:35:26 +09:00
INADA Naoki
ff208ad7d0 0.4.8 2016-07-29 22:25:05 +09:00
INADA Naoki
83e7b0aeac Merge branch 'release-0.4' 2016-07-21 19:33:49 +09:00
INADA Naoki
b911b3c652 Fix ext_hook call (#203)
fixes #202
2016-07-21 19:32:00 +09:00
INADA Naoki
334dbe2a96 Enable Python35-x64 in AppVeyor 2016-07-21 19:19:32 +09:00
INADA Naoki
d6254abc8a Use AppVeyor to build windows wheel (#188)
* Add AppVeyor support to build windows wheel
* Fix test_limits on 32bit environments
* Ignore Python35-x64 test fail for now
  Should be fixed in next version.
2016-07-21 19:18:48 +09:00
INADA Naoki
0ef5f4d691 Merge pull request #195 from jfolz/master
Use new buffer interface to unpack
2016-06-14 02:29:23 +09:00
folz
2b63e9fbbb enable unpacking from memoryview 2016-06-13 15:37:33 +02:00
INADA Naoki
b887c1a4ad Merge pull request #199 from methane/struct-unpack-from
Use struct.unpack_from instead of struct.unpack
2016-05-25 00:19:31 +09:00
INADA Naoki
c16a1c6bdf fallback: Use bytearray as buffer 2016-05-24 07:32:30 +09:00
INADA Naoki
6b8919355d fallback: Use struct.unpack_from when possible 2016-05-24 02:46:29 +09:00
INADA Naoki
b78c0c509c Merge pull request #198 from methane/refactoring-fallback
fallback: refactoring
2016-05-22 15:20:38 +09:00
INADA Naoki
e9c42fa523 fallback: simplify write_bytes callback implementation 2016-05-22 13:31:01 +09:00
INADA Naoki
3322a76989 Remove _fb_ prefix 2016-05-22 11:08:20 +09:00
INADA Naoki
ae8e98e669 Merge pull request #196 from methane/fallback-bytearray-buffer
fallback: Rewrite buffer from array of bytes to bytes
2016-05-22 11:06:02 +09:00
INADA Naoki
f421f59a28 fallback: Rewrite buffer from array of bytes to bytearray 2016-05-20 21:56:10 +09:00
INADA Naoki
318ddfc052 Remove wrong download_url from package metadata 2016-05-13 09:35:02 +09:00
INADA Naoki
c6c4e59f4c s/realloc/PyMem_Realloc/ (#193) 2016-05-08 16:31:52 +09:00
INADA Naoki
a5c8bafad4 Remove unused import (#190) 2016-05-05 02:46:10 +09:00
INADA Naoki
5c052264bc Update ChangeLog 2016-05-05 02:31:03 +09:00
INADA Naoki
63e23d37f9 travis: Use docker to test 32bit environment (#189)
* travis: testing matrix.include feature to use docker
* Add test script for 32bit
* Fix OverflowError in 32bit Environment
2016-05-05 02:07:46 +09:00
INADA Naoki
fc2933853a Pure Python packer supports memoryview of multi byte items. 2016-05-05 00:50:11 +09:00
INADA Naoki
53f47ef55d Remove double underscore prefix 2016-05-05 00:49:48 +09:00
folz
a91d5c538e add lower bound tests for memoryviews 2016-05-04 12:03:37 +02:00
folz
5860af953a refactor header packing for str and bin types 2016-05-04 11:01:27 +02:00
folz
0b55989f0b more descriptive test names 2016-05-04 10:04:09 +02:00
folz
0ec2e3534f fix problems associated with packing memoryviews
fix wrong length when packing multibyte memoryviews in fallback
add tests for memoryviews of different types and sizes and check contents of packed data
2016-05-03 16:55:14 +02:00
INADA Naoki
ceb9635a3f Use AppVeyor to build windows wheel (#188)
* Add AppVeyor support to build windows wheel
* Fix test_limits on 32bit environments
* Ignore Python35-x64 test fail for now
  Should be fixed in next version.
2016-05-03 11:58:28 +09:00
INADA Naoki
6b113a6fb3 Use Python's memory API (#185) 2016-04-30 17:07:14 +09:00
Timothy Cyrus
40ee322440 Update README.rst (#184)
Change PNG Badges to SVG
2016-04-30 00:18:27 +09:00
INADA Naoki
2192310bc4 Use manylinux1 wheel for Cython (#179)
* Use manylinux1 wheel for Cython
* Use newer pip
2016-04-16 02:03:18 +09:00
INADA Naoki
f895517995 Merge pull request #172 from methane/palaviv-msgpack-exceptions
Organize Exceptions
2016-02-14 17:08:13 +09:00
INADA Naoki
b2a8ce6cbd Deprecate more useless exceptions 2016-02-14 14:32:11 +09:00
INADA Naoki
6e36476239 remove too much parameterized tests 2016-02-14 14:29:34 +09:00
INADA Naoki
3dad39811d Deprecate PackExceptions 2016-02-14 14:29:34 +09:00
INADA Naoki
d90008d4f5 ExtraData should be UnpackValueError 2016-02-14 11:46:28 +09:00
palaviv
e15085db03 removed MsgpackBaseException 2016-02-12 15:39:50 +02:00
palaviv
1183eff688 reraising ValueError from unpack.h as UnpackValueError 2016-02-12 15:37:39 +02:00
palaviv
d44063119b changed more ValueErrors to PackValueError 2016-02-12 15:36:48 +02:00
palaviv
7d2d46effc msgpack pack and unpack throw only exceptions that inherit from MsgpackBaseException. Cython and fallback throw the same exceptions 2016-02-12 11:00:39 +02:00
INADA Naoki
82b3121507 Merge pull request #161 from jfolz/feature/packbuffers
Support packing memoryview objects
2016-01-26 00:17:30 +09:00
folz
31adc5a3c0 Support packing memoryview objects 2016-01-25 13:25:10 +01:00
INADA Naoki
8036cb4e0e Merge pull request #158 from methane/feature/strict-typecheck
Packer: check type strictly
2016-01-25 11:37:07 +09:00
INADA Naoki
a779b79b47 Add test for strict_types option 2016-01-25 10:19:49 +09:00
INADA Naoki
c8513898e2 Merge pull request #168 from msgpack/feature/drop-2.6
Drop Python 2.6, 3.2 support
2016-01-25 01:20:48 +09:00
INADA Naoki
005739388d Drop Python 2.6, 3.2 support 2016-01-25 01:17:21 +09:00
INADA Naoki
3a8bb070f7 Update ChangeLog 2016-01-25 01:12:56 +09:00
INADA Naoki
1f8240eaf6 0.4.7 2016-01-25 01:10:50 +09:00
INADA Naoki
151a16d216 Merge pull request #165 from frsyuki/fix-string-too-large-message
Fix wrong 'dict is too large' on unicode string
2016-01-12 10:26:11 +09:00
Sadayuki Furuhashi
83424bd7b3 Fix wrong 'dict is too large' on unicode string 2016-01-11 13:57:33 -08:00
INADA Naoki
68d62bf9a1 Merge pull request #160 from thedrow/patch-2
Travis will now cache dependencies despite having a custom install step
2016-01-03 02:27:08 +09:00
INADA Naoki
b6e962d0a6 Merge pull request #163 from ThomasWaldmann/master
fix typos
2015-12-10 17:42:15 +09:00
Thomas Waldmann
9c6584ee10 fix typos 2015-12-09 13:53:42 +01:00
INADA Naoki
88a38dce06 Merge pull request #159 from thedrow/patch-1
Added Python 3.5 to the build matrix
2015-11-23 04:34:39 +09:00
Omer Katz
e4aa43d769 Travis will now cache dependencies despite having a custom install step. 2015-11-17 17:08:04 +02:00
Omer Katz
81177caff7 Run the build with 3.5 since it's still not available by default in travis. 2015-11-17 16:57:25 +02:00
Omer Katz
4d9684db0a Added Python 3.5 to the build matrix. 2015-11-17 15:32:34 +02:00
Omer Katz
6f38bf7dd4 Added python 3.5 to tox.ini. 2015-11-17 15:31:36 +02:00
INADA Naoki
628c519187 strict type check for ext type 2015-11-10 03:41:09 +09:00
INADA Naoki
9b673279d3 strict_types should be last argument 2015-11-10 03:37:54 +09:00
INADA Naoki
1032ef9bf2 fallback unpacker: precise => strict 2015-11-10 03:33:50 +09:00
INADA Naoki
cbdf3c339a s/precise_mode/strict_types/ 2015-11-10 03:30:11 +09:00
INADA Naoki
e9a47cbd35 Merge branch 'master' of https://github.com/faerot/msgpack-python into pramukta-default_function_on_int_overflow 2015-11-10 01:52:52 +09:00
INADA Naoki
29266b024e Update ChangeLog 2015-11-09 02:34:28 +09:00
INADA Naoki
a1317b604f refactor 2015-11-09 02:34:17 +09:00
INADA Naoki
ca87a7e539 Merge pull request #135 from pramukta/default_function_on_int_overflow
Default function on int overflow
2015-11-09 02:23:22 +09:00
INADA Naoki
7d900371c8 Fix compile error 2015-11-09 02:09:39 +09:00
INADA Naoki
b6f7243479 Merge pull request #157 from methane/unpack-params
Add missing params to unpack()
2015-11-09 02:03:17 +09:00
INADA Naoki
f7d3715f2c Add missing params to unpack() 2015-11-09 02:00:48 +09:00
INADA Naoki
e38e49ff93 Merge pull request #156 from methane/refactor
refactor C code
2015-11-09 01:54:24 +09:00
INADA Naoki
de3c2b99f7 refactor C code
fixes #137
2015-11-09 01:52:37 +09:00
INADA Naoki
3cef27b69b Update ChangeLog 2015-11-09 00:54:06 +09:00
INADA Naoki
8aadc5c380 readme: Fix markup 2015-11-09 00:50:07 +09:00
INADA Naoki
e601ef4c23 Remove msgpack 2.0 from README
There is no version in the spec.
2015-11-09 00:43:52 +09:00
INADA Naoki
53fcd9b9df Update gitignore 2015-11-08 19:37:40 +09:00
INADA Naoki
6f208abbc7 Update Windows compiler information 2015-11-08 17:34:52 +09:00
INADA Naoki
02611afd5f Drop pypip.in badge
It has been down for so long
2015-11-08 17:29:09 +09:00
INADA Naoki
dbe6572ee5 Merge pull request #155 from methane/fix/152
Decrease refcnt when an error happened while unpacking
2015-11-08 12:45:29 +09:00
INADA Naoki
35a69ac9c2 Decrease refcnt when an error happened while unpacking
Fixes #152
2015-11-08 12:43:54 +09:00
INADA Naoki
a329850147 Merge pull request #153 from methane/fix/warnings
fix compiler warnings
2015-11-07 16:54:11 +09:00
INADA Naoki
e9ab4d8824 Fix warnings
fixes #146
2015-11-07 16:52:58 +09:00
INADA Naoki
ab359e3330 Update travis setting 2015-11-07 16:50:04 +09:00
INADA Naoki
c102e6cee5 executable setup.py 2015-11-07 14:30:05 +09:00
INADA Naoki
52a38c6e9d remove unused bat file 2015-11-07 14:26:14 +09:00
INADA Naoki
672b220a3f remove unused bat file 2015-11-07 13:17:28 +09:00
INADA Naoki
cd1f158b76 Merge pull request #151 from ThomasWaldmann/patch-1
fix typo in setup.py
2015-10-30 17:13:11 +09:00
TW
c3a3f9b0a5 fix typo in setup.py 2015-10-30 00:36:12 +01:00
INADA Naoki
aa209ab1e9 Merge pull request #143 from emulbreh/pass-ext-hook
Accept ext_hook for unpack()
2015-08-24 03:16:24 +09:00
Johannes Dollinger
4eb4c7a994 Accept ext_hook for unpack() 2015-07-27 20:29:43 +02:00
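A sketch of what an `ext_hook` looks like; the type code and handler here are made up for illustration:

```python
import msgpack

packed = msgpack.packb(msgpack.ExtType(42, b"payload"))

def ext_hook(code, data):
    # Called for every ext value; `code` is the application-defined type tag.
    return (code, data)

assert msgpack.unpackb(packed, ext_hook=ext_hook) == (42, b"payload")
```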
INADA Naoki
d816aa8040 Merge pull request #136 from tbeu/patch-1
Update README.rst
2015-03-23 12:15:27 +09:00
tbeu
734cb71dac Update README.rst
Fix typo
2015-03-22 21:35:21 +01:00
Pramukta Kumar
6f02d252e1 corresponding change to cython implementation 2015-03-17 15:16:17 -04:00
Pramukta Kumar
10cd2d2ebf calling the default function upon integer overflow in the fallback routine 2015-03-17 15:05:04 -04:00
Pramukta Kumar
2d05b40b03 Test to demonstrate that the default function isn't always called (#133) 2015-03-17 15:02:40 -04:00
INADA Naoki
b7806a6e6e README: Update version 2015-03-13 04:23:04 +09:00
INADA Naoki
b49e53003d Merge pull request #128 from methane/travis/cython-0.22
travis: Use cython 0.22
2015-03-13 04:21:37 +09:00
INADA Naoki
2dda8fc4a5 travis: Build only master 2015-03-13 04:18:10 +09:00
INADA Naoki
b19e336108 travis: Cython 0.22 2015-03-13 04:05:44 +09:00
INADA Naoki
9fe19cc408 0.4.6 2015-03-13 03:51:14 +09:00
INADA Naoki
4576b94b6c fallback: Add some comment to Unpacker members. 2015-01-27 14:04:32 +09:00
INADA Naoki
3f5e058264 Merge pull request #125 from bwesterb/master
Rollback to correct position in the case of OutOfData.

Fixes #124
2015-01-27 14:03:30 +09:00
Bas Westerbaan
c5d621853d test_sequnpack: python3 literals 2015-01-26 20:55:23 +01:00
Bas Westerbaan
a71a24d86a Fix #124
When using Unpacker as an iterator, after each yield the internal
buffer (_fb_buffer) was compacted by reallocation (done by _fb_consume).
When dealing with a lot of small objects, this is very inefficient.
Thus, in commit 7eb371f827, the pure-Python
fallback only reallocates the complete buffer when the iteration stops.
When data happens to be missing from the buffer halfway through, we roll back
the buffer to the state before the failed call and raise an OutOfData.
This rollback, done by _fb_rollback, did not consider the possibility
that the buffer was *not* reallocated. This commit corrects that.
2015-01-26 20:34:31 +01:00
Bas Westerbaan
83404945c0 Add test for issue #124 2015-01-26 20:31:03 +01:00
INADA Naoki
68b0294465 update README 2015-01-26 00:39:50 +09:00
INADA Naoki
630c046bf2 0.4.5 2015-01-26 00:38:36 +09:00
INADA Naoki
ec5dff113e Merge pull request #105 from msgpack/max-xxx-size
Add max_<type>_len option to unpacker. (fixes #97).
2015-01-25 03:34:25 +09:00
INADA Naoki
2985f4d865 Fix error when use unicode_literal in Python 2 2015-01-25 02:35:57 +09:00
INADA Naoki
75ce78dd15 Add max_<type>_len option to unpacker. (fixes #97).
Fix build error on 32bit environment (fixes #102).
2015-01-25 01:41:21 +09:00
INADA Naoki
c43fb48724 Merge pull request #123 from ktdreyer/pytest-2.3-compat
tests: add pytest 2.3 compatibility
2015-01-24 17:30:21 +09:00
Ken Dreyer
f40fdf523a tests: add pytest 2.3 compatibility
Adjust the skipif conditional to use the older pytest 2.3 syntax.

(This allows the tests to pass with the system pytest package on RHEL
7.0, since RHEL 7.0 ships pytest 2.3.5.)
2015-01-23 12:27:58 -07:00
INADA Naoki
5025b51d3b Merge branch 'master' of github.com:msgpack/msgpack-python 2015-01-14 10:02:55 +09:00
INADA Naoki
d14a7885c7 Fix typo in ChangeLog.
Fixes #121.
Thanks @dmick
2015-01-14 10:02:04 +09:00
INADA Naoki
35947630b7 Merge pull request #120 from msabramo/patch-2
README.rst: Add PyPI badge
2015-01-09 15:43:53 +09:00
Marc Abramowitz
e7f505119d README.rst: Add PyPI badge 2015-01-08 22:05:27 -08:00
INADA Naoki
5abb73ebfa Merge pull request #119 from msabramo/patch-1
README.rst: Add code-blocks
2015-01-09 14:19:27 +09:00
Marc Abramowitz
2b4a815e5a README.rst: Add code-blocks
For syntax highlighting
2015-01-08 20:37:31 -08:00
INADA Naoki
87b493b2d8 Update README 2015-01-09 09:54:30 +09:00
INADA Naoki
deb8094e1d 0.4.4 2015-01-09 09:53:44 +09:00
INADA Naoki
3445e43d72 0.4.4 2015-01-09 09:53:27 +09:00
INADA Naoki
f86a0442ec Merge pull request #117 from msgpack/cython-cache
Use wheelhouse for cython
2015-01-09 09:41:49 +09:00
INADA Naoki
b6055ce47e Fix tox.ini 2015-01-09 09:35:31 +09:00
INADA Naoki
ca9768771d Use wheelhouse for cython 2015-01-09 09:31:26 +09:00
INADA Naoki
ed30acb934 Merge pull request #116 from msgpack/fix-0_4_3-regression
Fix compile error on _packer
2015-01-09 04:30:24 +09:00
INADA Naoki
715fcac6c6 Fix tox 2015-01-09 04:19:34 +09:00
INADA Naoki
ee0e435535 Fix compile error. 2015-01-09 04:10:25 +09:00
INADA Naoki
c25c8d7246 Check extension module was compiled. 2015-01-09 04:07:47 +09:00
INADA Naoki
868d149efc Merge branch 'master' of github.com:msgpack/msgpack-python 2015-01-07 16:09:39 +09:00
INADA Naoki
5bc685973d 0.4.3 2015-01-07 15:59:35 +09:00
INADA Naoki
198196c731 Merge pull request #115 from msgpack/fix-windows
Fix build failure for Python 2.7 on Windows.
2015-01-07 15:56:58 +09:00
INADA Naoki
9624a2aca3 Fix build failure for Python 2.7 on Windows.
Remove int8_t usage.
2015-01-07 12:10:42 +09:00
INADA Naoki
593887025e Fix README reST 2015-01-02 12:12:09 +09:00
INADA Naoki
00f193ba08 Merge pull request #113 from msgpack/new-travis
Use new container based Travis-CI
2014-12-18 13:24:08 +09:00
INADA Naoki
0be3e874c6 Use new container based Travis-CI 2014-12-18 13:17:49 +09:00
INADA Naoki
547a668ad6 Merge pull request #110 from JasonXJ/master
add support for pypy3
2014-11-14 16:24:11 +09:00
Xiaojie Lin
d5e9ac9316 add support for pypy3 2014-11-14 14:47:54 +11:00
INADA Naoki
ef3f94101a Merge branch 'master' of github.com:msgpack/msgpack-python 2014-08-31 03:13:09 +09:00
INADA Naoki
6948dd5120 Fix benchmark 2014-08-31 03:03:48 +09:00
INADA Naoki
5d6481dcbb Merge pull request #107 from msgpack/fix-build
Fix build and tests.
2014-08-31 02:36:32 +09:00
INADA Naoki
d6c773dc4d Fix build and tests. 2014-08-31 02:29:05 +09:00
INADA Naoki
5cfa49bb2c Merge pull request #103 from bwesterb/master
Some performance improvements for PyPy
2014-06-16 16:17:24 +09:00
Bas Westerbaan
0532ea87fb fallback: fix BufferFull with sloppy consume 2014-06-15 22:45:30 +02:00
Bas Westerbaan
b334d441c3 fallback: _fb_read: do a big read, when we need a big read 2014-06-14 18:42:02 +02:00
Bas Westerbaan
952eb9fc53 fallback: add some comments to _fb_read 2014-06-14 18:34:17 +02:00
Bas Westerbaan
7eb371f827 fallback: do not reset the buffer completely in between of iterations 2014-06-14 18:30:38 +02:00
Bas Westerbaan
ba8cf1c402 fallback: _fb_consume: improve performance with pypy 2014-06-14 18:26:30 +02:00
Bas Westerbaan
56cf384159 fallback: set default read_size to 4096 2014-06-14 18:25:57 +02:00
Bas Westerbaan
67391fd60e fallback: add missing update of _fb_buf_n 2014-06-14 18:25:34 +02:00
INADA Naoki
4d4a0cc442 Add changelog 2014-05-26 15:21:14 +09:00
INADA Naoki
8f1c0504f1 Travis preinstall Python 3.4 2014-05-26 01:28:30 +09:00
INADA Naoki
7f623c0906 Fix unpacking uint32 on 32bit or LLP64. 2014-05-26 01:17:53 +09:00
faerot
b877ce2afa precise_mode instead of distinguish_tuple
When the precise_mode flag is set, serialization will be as precise as
possible: type checks will be exact (type(..) is ... instead of
isinstance(..., ...)) and tuple will be treated as an undefined type. This
mode makes accurate object serialization possible.
2014-05-22 16:45:26 +03:00
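The option described here was later renamed `strict_types` (see the s/precise_mode/strict_types/ commit above); a sketch of the behavior under that name, with an assumed `default` handler:

```python
import msgpack

# With exact type checks, a tuple is not treated as a list and is routed to `default`.
packer = msgpack.Packer(strict_types=True, default=lambda obj: list(obj))

packed = packer.pack((1, 2, 3))
assert msgpack.unpackb(packed) == [1, 2, 3]
```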
faerot
3b933f0966 added distinguish_tuple argument to Packer
This will make precise serialization of Python types possible.
2014-05-22 11:32:54 +03:00
INADA Naoki
61bac2f586 0.4.2 2014-03-26 17:20:51 +09:00
INADA Naoki
803684b90d 0.4.2 2014-03-26 15:52:03 +09:00
INADA Naoki
3a9dc1d7ea Merge pull request #94 from msgpack/strict-input-check
Add tests for limits.
2014-03-26 15:37:29 +09:00
INADA Naoki
a72e75d7c8 Fix PyPy fail. 2014-03-26 15:12:28 +09:00
INADA Naoki
5fb9d8a7fd Fix Python 3.2 2014-03-26 13:32:28 +09:00
INADA Naoki
7d0e145e91 Allow ValueError for packing integer overs format limit. 2014-03-26 12:50:28 +09:00
INADA Naoki
ef5d93d4ea Fix size limit on pack_array_header and pack_map_header. 2014-03-26 11:05:53 +09:00
INADA Naoki
e99331d1ab Fix skipif marking. 2014-03-26 11:01:44 +09:00
INADA Naoki
c60ab28ee7 Add check for format limits. 2014-03-26 03:03:18 +09:00
INADA Naoki
e7f87d9d41 More limit check. 2014-03-26 03:00:47 +09:00
INADA Naoki
6c0c306f96 Add tests for limits. 2014-03-26 02:49:03 +09:00
INADA Naoki
ac4cd06845 Merge pull request #93 from popravich/unpacker_ext_hook_fix
Unpacker's ext_hook fixed + tests
2014-03-25 12:19:51 +09:00
Alexey Popravka
ee38505db5 fixed super() for python2 2014-03-24 15:42:16 +02:00
Alexey Popravka
d850e56dd0 Unpacker's ext_hook fixed + tests 2014-03-24 15:31:06 +02:00
INADA Naoki
e9de6b7f39 travis: Add Python 3.4 to testing. 2014-03-19 10:42:05 +09:00
INADA Naoki
55eab8b4d6 Update README 2014-02-17 10:06:39 +09:00
INADA Naoki
1ca3c27a81 0.4.1 2014-02-17 10:03:36 +09:00
INADA Naoki
c567cf478b Remove unused import. 2014-02-17 09:54:55 +09:00
INADA Naoki
eb3537ab50 Merge branch 'pr/82' 2014-02-17 04:07:16 +09:00
INADA Naoki
518f886b11 fix 2014-02-17 04:06:58 +09:00
INADA Naoki
7effb4aac6 fix 2014-02-17 04:05:04 +09:00
INADA Naoki
ff263dfee8 Merge pull request #91 from msgpack/reduce-six
Reduce six usage
2014-02-17 03:38:06 +09:00
INADA Naoki
0c22e775c9 Remove six completely. 2014-02-15 22:36:52 +09:00
INADA Naoki
63eab502df Remove six.b() 2014-02-15 22:20:57 +09:00
INADA Naoki
dee2d87d41 six.BytesIO => io.BytesIO 2014-02-15 22:16:01 +09:00
INADA Naoki
96d7d0edc6 Merge pull request #89 from msgpack/better-travis
travis: Simplify .travis.yml and add PyPy env.
2014-02-15 22:03:38 +09:00
INADA Naoki
213f7888c3 Remove too strict type check from test. 2014-02-14 09:26:46 +09:00
INADA Naoki
a5368f62e2 travis: Simplify .travis.yml and add PyPy env. 2014-02-14 09:01:00 +09:00
INADA Naoki
400a1030cd Merge pull request #88 from msgpack/fix-67
Fix Unpacker doesn't increment refcnt.
2014-02-13 11:55:04 +09:00
INADA Naoki
6d80569b9b Unpacker: maintain refcnt (fix #67). 2014-02-13 09:58:38 +09:00
INADA Naoki
cf63f19211 Fix test 2014-02-13 09:57:51 +09:00
INADA Naoki
0cab6092e4 Add refcount check. 2014-02-13 09:55:17 +09:00
INADA Naoki
38cf835c95 Rename 2014-02-13 09:40:12 +09:00
Sergey Zhuravlev
11a3b1561a fixed support of python3 2014-02-12 23:09:23 +04:00
INADA Naoki
7b24d0fe5a Merge pull request #87 from msgpack/fix-83
Feed data from file before _unpack()
2014-02-13 03:24:09 +09:00
INADA Naoki
9d61f24387 Feed data from file before _unpack() 2014-02-13 03:10:51 +09:00
INADA Naoki
d2fc801034 Fix warning on 64bit environment. 2014-02-13 01:57:34 +09:00
INADA Naoki
f322ed4e1b Merge pull request #84 from wbolster/exception-type-cleanups
Always raise TypeError for wrong argument types
2014-02-13 01:19:33 +09:00
INADA Naoki
7973cce554 Merge pull request #85 from wbolster/fix-cython-warnings
Cosmetic changes to fix Cython warnings
2014-02-13 00:46:02 +09:00
Wouter Bolsterlee
dd65341e0d Cosmetic changes to fix Cython warnings
Put declarations on separate lines to avoid warnings like this:

  Non-trivial type declarators in shared declaration (e.g. mix
  of pointers and values). Each pointer declaration should be
  on its own line.
2014-02-11 21:05:15 +01:00
Wouter Bolsterlee
77046b839d Always raise TypeError for wrong argument types
The code that checks whether hooks are callable() (and some other type
checks) should always raise TypeError on failure. Before this change,
both ValueError and TypeError were used in an inconsistent way (C
extension and Python implementation were not the same).
2014-02-11 20:42:58 +01:00
Sergey Zhuravlev
48ca2d700d Added support of bytearrays to msgpack/fallback.py 2013-12-15 16:22:39 +00:00
INADA Naoki
d5436c2819 Update ChangeLog 2013-10-21 02:33:58 +09:00
INADA Naoki
f31a4403a1 Document update. 2013-10-21 01:47:54 +09:00
INADA Naoki
1d0096b998 0.4.0 2013-10-21 01:20:13 +09:00
INADA Naoki
e802abebf1 Merge pull request #79 from msgpack/newspec
[WIP] Newspec stage 2.
2013-10-20 09:18:50 -07:00
INADA Naoki
d84a403bc0 fix bugs. 2013-10-21 01:12:57 +09:00
INADA Naoki
e3fee4db5f fallback: support packing ExtType 2013-10-21 00:59:22 +09:00
INADA Naoki
37c2ad63af Add tests and bugfix. 2013-10-21 00:29:05 +09:00
INADA Naoki
cb78959678 Update README. 2013-10-21 00:01:47 +09:00
INADA Naoki
84dc99c894 Add ext_type example to README. 2013-10-20 23:27:32 +09:00
INADA Naoki
0d5c58bd51 cleanup 2013-10-20 23:06:02 +09:00
INADA Naoki
822cce823c Support unpacking new types. 2013-10-20 22:59:27 +09:00
INADA Naoki
96bcd76f49 Packing ExtType and some cleanup 2013-10-20 20:28:32 +09:00
INADA Naoki
aa68c9b833 fallback: Support pack_ext_type. 2013-10-20 15:40:20 +09:00
INADA Naoki
ec0691fb2c Merge pull request #77 from msgpack/newspec
[WIP] Support new spec.
2013-10-19 23:11:34 -07:00
INADA Naoki
27f0cba8a5 Merge branch 'master' of https://github.com/antocuni/msgpack-python into newspec
Conflicts:
	msgpack/fallback.py
	msgpack/unpack.h
	msgpack/unpack_define.h
	msgpack/unpack_template.h
2013-10-20 15:08:31 +09:00
INADA Naoki
7123341ca8 code refactoring. 2013-10-20 14:34:36 +09:00
Antonio Cuni
6386481024 add a note in the README 2013-10-19 18:43:16 +02:00
Antonio Cuni
c9b97f0788 implement unpacking of ext 8,16,32 2013-10-19 18:04:30 +02:00
Antonio Cuni
56dd1650a4 implement unpacking for all the fixtext formats 2013-10-19 17:27:16 +02:00
Antonio Cuni
985d4c1496 add a test for unpacking extended types 2013-10-19 11:34:28 +02:00
Antonio Cuni
ff858387d3 implement unpack_one also for the cython version, and add a test for it 2013-10-19 01:49:03 +02:00
Antonio Cuni
a7485eccb2 add the hook for unknown types also to the cython Packer 2013-10-18 17:46:42 +02:00
Antonio Cuni
5467515065 implement Packer.pack_extended_type also in the cython version of the code 2013-10-18 17:33:54 +02:00
Antonio Cuni
afa28fb205 add support to unpack all ext formats 2013-10-18 15:54:12 +02:00
Antonio Cuni
c727440ba5 automatically find the best format to encode extended types 2013-10-18 15:45:50 +02:00
Antonio Cuni
522c4bfc79 slightly change to API 2013-10-18 15:03:58 +02:00
Antonio Cuni
5529dfe596 kill some duplicate code from unpack/unpackb and move the logic to Unpacker.unpack_one. By doing this we no longer need to make the module-level pack/unpack parametric on the class, because they contain no logic at all 2013-10-18 14:38:52 +02:00
INADA Naoki
d9439204c7 Add ext type support to fallback.Unpacker. 2013-10-17 11:29:36 +09:00
INADA Naoki
f162bf6f79 Add tests for str8 and bin types. 2013-10-17 09:37:20 +09:00
INADA Naoki
85eaff344b Add bin type support for fallback Unpacker. 2013-10-17 09:15:19 +09:00
INADA Naoki
84f6b10019 Add bin type support to pure Python packer. 2013-10-17 08:52:59 +09:00
INADA Naoki
171c538113 refactoring. 2013-10-17 08:44:25 +09:00
INADA Naoki
da12e177a3 Add bin type support. 2013-10-17 08:35:08 +09:00
Antonio Cuni
d61097511a add support for extended types: you can now pack/unpack custom python objects by subclassing Packer and Unpacker 2013-10-15 16:59:43 +02:00
INADA Naoki
f45d7b4e2d Merge pull request #72 from lgov/master
* msgpack/exceptions.py: Fix typo in error message.
2013-09-13 05:15:36 -07:00
Lieven Govaerts
12f87147b5 * msgpack/exceptions.py: Fix typo in error message. 2013-09-13 13:47:13 +02:00
INADA Naoki
2f6061cb4f Merge pull request #66 from yamt/fixes
some fixes and tests
2013-06-04 23:31:46 -07:00
YAMAMOTO Takashi
e250b89920 more tests 2013-06-03 13:54:00 +09:00
YAMAMOTO Takashi
d1b9ecbc8e fix long vs long long bugs
These bugs were introduced by the "fix long/int confusions in pyx version of
unpack" commit.
2013-06-03 13:53:47 +09:00
YAMAMOTO Takashi
3dbb2d1e7e fix compilation errors 2013-06-03 13:53:43 +09:00
INADA Naoki
d4bb86c0c8 Merge pull request #65 from msgpack/old-buffer
Stop using new style buffer API.
2013-05-18 20:43:27 -07:00
INADA Naoki
956f55ecdf Stop using const_void_ptr typedef.
New Cython supports const natively.
2013-05-19 12:32:33 +09:00
INADA Naoki
bbe86e7a92 Revert "Use new buffer interface."
This reverts commit 085db7f8dc.

Conflicts:
	msgpack/_unpacker.pyx
2013-05-19 12:30:23 +09:00
INADA Naoki
08c56d66f6 Use --cplus for cythoning 2013-05-19 01:13:21 +09:00
YAMAMOTO Takashi
63b9fa5843 fix a compilation error
msgpack/_unpacker.pyx: In function 'PyObject* __pyx_pf_7msgpack_9_unpacker_unpac
kb(PyObject*, PyObject*, PyObject*, PyObject*, int, PyObject*, PyObject*, PyObje
ct*)':
msgpack/_unpacker.pyx:111:70: error: invalid cast from type 'Py_buffer' to type 'char*'
2013-05-16 12:58:00 +09:00
YAMAMOTO Takashi
b0c193f3e0 fix long/int confusions in pyx version of unpack 2013-05-16 12:41:53 +09:00
INADA Naoki
82313b713e Merge pull request #63 from yamt/typo
fix more comment typos
2013-05-06 23:45:17 -07:00
YAMAMOTO Takashi
56dbf7f9be fix more comment typos 2013-05-07 13:56:39 +09:00
INADA Naoki
a2a9a9f4c8 Merge pull request #62 from yamt/comment-typo
fix a typo in a comment
2013-05-01 01:23:41 -07:00
YAMAMOTO Takashi
0c3fecf91b fix a typo in a comment 2013-05-01 12:15:12 +09:00
INADA Naoki
b587bb02c4 Merge pull request #60 from jnothman/patch-2
Remove obsolete StopIteration warning
2013-04-12 00:46:47 -07:00
jnothman
a692bf9852 Remove obsolete StopIteration warning 2013-04-10 21:03:41 +10:00
INADA Naoki
c037aa7710 Merge pull request #59 from msgpack/refactor
Use new style buffer interface.
2013-04-07 19:07:39 -07:00
INADA Naoki
085db7f8dc Use new buffer interface. 2013-04-08 10:57:21 +09:00
INADA Naoki
18215b01bb Unpacker.feed() uses new buffer interface. 2013-04-08 10:52:11 +09:00
INADA Naoki
075dbecc39 Merge pull request #58 from msgpack/refactor
Remove unnecessary type declaration.
2013-04-07 10:03:46 -07:00
INADA Naoki
0faa1bb558 Remove unnecessary type declaration. 2013-04-08 01:57:37 +09:00
INADA Naoki
5c90f953da Add build batch for Windows 2013-03-13 18:15:43 +09:00
INADA Naoki
c9b6e5b65d s/\t/ /g 2013-02-27 21:24:25 +09:00
INADA Naoki
e8f6d2a030 Merge pull request #54 from msgpack/remove-macros
Remove macros for readability.
2013-02-27 04:18:56 -08:00
INADA Naoki
c49489cd37 remove some macros. 2013-02-27 21:12:20 +09:00
INADA Naoki
c91131f49f remove msgpack_pack* macros 2013-02-27 20:37:07 +09:00
INADA Naoki
944b41e826 Merge pull request #53 from jnothman/patch-1
Copy note on OutOfData from pypi to ChangeLog
2013-02-27 00:15:51 -08:00
jnothman
58d8effc35 Copy note on OutOfData from pypi to ChangeLog 2013-02-27 18:26:57 +11:00
INADA Naoki
d2feb13629 doc: remove fallback from api reference. 2013-02-26 14:20:44 +09:00
INADA Naoki
5176e92d99 Fix typeerror. 2013-02-26 09:55:13 +09:00
INADA Naoki
8e13598a36 docs: better unpacker docstring. 2013-02-26 09:49:25 +09:00
INADA Naoki
3ce005cf37 better packer docstring 2013-02-26 09:20:44 +09:00
INADA Naoki
1e38bfa123 fallback: refactor 2013-02-25 18:23:42 +09:00
INADA Naoki
230537cf28 Update docs. 2013-02-25 14:29:49 +09:00
INADA Naoki
7991530cec Update README 2013-02-24 18:06:50 +09:00
INADA Naoki
f4cb6fb877 Add apidoc 2013-02-24 18:06:36 +09:00
INADA Naoki
04e0812ad4 Merge pull request #51 from msgpack/revert-skip-reserved
Revert skipping reserved bytes.
2013-02-23 01:14:13 -08:00
INADA Naoki
38a9ad98c9 Revert skipping reserved byte. 2013-02-23 18:11:46 +09:00
INADA Naoki
a6859791a2 Revert "Skip reserved byte"
This reverts commit ff3342aeed.
2013-02-23 18:01:43 +09:00
INADA Naoki
5c51203d14 Revert "Fix test for Python 3."
This reverts commit 43dd224d52.
2013-02-23 18:00:54 +09:00
INADA Naoki
f0fd90a759 Fix exception incompatibility. 2013-02-22 17:41:52 +09:00
INADA Naoki
d766820421 Fix easy bug. 2013-02-21 16:55:42 +09:00
INADA Naoki
cd3590e785 Merge pull request #50 from msgpack/skip-reserved
Unpacker skip reserved bytes
2013-02-20 23:07:57 -08:00
INADA Naoki
43dd224d52 Fix test for Python 3. 2013-02-21 16:04:58 +09:00
INADA Naoki
ff3342aeed Skip reserved byte 2013-02-21 16:02:33 +09:00
INADA Naoki
9524033194 skip reserved byte. 2013-02-21 14:01:12 +09:00
INADA Naoki
1532eaa684 Merge pull request #46 from alex/patch-1
On PyPy, preallocate lists
2013-02-16 16:34:34 -08:00
Alex Gaynor
3f12846d40 On PyPy, preallocate lists
When deserealizing arrays, preallocate the resulting list at the correct size.
2013-02-16 12:08:14 -08:00
INADA Naoki
626ae51017 README: s/nosetest/pytest/ 2013-02-16 14:03:39 +09:00
INADA Naoki
df6449f173 0.3.0 2013-02-16 09:28:29 +09:00
INADA Naoki
301ad4cd54 Merge branch 'master' of github.com:msgpack/msgpack-python 2013-02-04 15:21:15 +09:00
INADA Naoki
6740b90385 Merge branch 'purepython' 2013-02-04 15:16:17 +09:00
INADA Naoki
a865f8f7e9 Use _private names for non public data members. (fix #44) 2013-02-04 15:14:30 +09:00
INADA Naoki
2330e6c7d9 Merge pull request #45 from msgpack/purepython
fallback enhancements.
2013-02-03 08:11:34 -08:00
INADA Naoki
1951b197b5 Skip compile error for extension modules. 2013-02-03 00:52:05 +09:00
INADA Naoki
22920baae6 Fix minor bugs and tuning unpacking dict. 2013-02-03 00:20:00 +09:00
INADA Naoki
0536d1bd0c Don't compile extension module when running on pypy 2013-02-03 00:11:26 +09:00
INADA Naoki
95dfec808a Add simple benchmark. 2013-02-03 00:02:37 +09:00
INADA Naoki
266eaf813d changelog: describe purepython fallback. 2013-01-29 15:13:20 +09:00
INADA Naoki
86983e27bc Add purepython fallback. (Merge branch 'purepython') 2013-01-29 15:12:04 +09:00
INADA Naoki
8d6a387dff fallback: Support Python 3. 2013-01-29 15:10:22 +09:00
INADA Naoki
cbabeebc95 Use MSGPACK_PUREPYTHON envvar to test fallback module 2013-01-29 14:47:16 +09:00
INADA Naoki
328369e52e pep8 friendly. 2013-01-29 14:33:37 +09:00
Bas Westerbaan
4cde7f080c fallback: _fb_read: add fast-path 2013-01-29 03:46:07 +01:00
Bas Westerbaan
d91a0d3d68 Revert "fallback: Use mmap objects instead of strings to unpack"
See next commit.

This reverts commit 770fed6b7f.
2013-01-29 03:45:17 +01:00
Bas Westerbaan
770fed6b7f fallback: Use mmap objects instead of strings to unpack
Signed-off-by: Bas Westerbaan <bas@westerbaan.name>
2013-01-29 03:44:44 +01:00
Bas Westerbaan
b9e9199eea fallback: python3 bugfix for new testcase of d2f549a4
Signed-off-by: Bas Westerbaan <bas@westerbaan.name>
2013-01-29 03:03:13 +01:00
Bas Westerbaan
d2f549a470 fallback: add actual rollback and add a testcase for it
Signed-off-by: Bas Westerbaan <bas@westerbaan.name>
2013-01-29 02:58:26 +01:00
Bas Westerbaan
fb81f80d14 fallback: bugfix in next() 2013-01-29 02:47:41 +01:00
Bas Westerbaan
94925acb12 fallback: do not use dynamic format strings for struct.(un)pack
Increases performance on PyPy.
2013-01-29 02:15:29 +01:00
Bas Westerbaan
af9c9ca2c9 fallback: performance: write(a+b) -> write(a); write(b) 2013-01-29 02:01:34 +01:00
Bas Westerbaan
b940802032 fallback: two fixes for raising ExtraData 2013-01-28 22:29:23 +01:00
Bas Westerbaan
6fa0f46a12 setup: remove Python 2 only syntax 2013-01-28 14:32:01 +01:00
Bas Westerbaan
69ba3c9bf9 fallback: use __pypy__.builders.StringBuilder when available
This increases performance *a lot* on PyPy.

Signed-off-by: Bas Westerbaan <bas@westerbaan.name>
2013-01-28 13:43:39 +01:00
Bas Westerbaan
2627b6ae9f setup: automatically fallback to pure Python module
Signed-off-by: Bas Westerbaan <bas@westerbaan.name>
2013-01-28 12:27:46 +01:00
Bas Westerbaan
6a28b28c63 Add pure Python fallback module
Signed-off-by: Bas Westerbaan <bas@westerbaan.name>
2013-01-28 12:27:24 +01:00
INADA Naoki
5f55e4c6db Switchng to py.test 2012-12-29 11:28:28 +09:00
INADA Naoki
97a9f3f05c Switching to py.test 2012-12-29 11:27:28 +09:00
INADA Naoki
593c832ab0 Use py.test instead of nosetests. 2012-12-29 11:24:25 +09:00
INADA Naoki
d57e369258 Fix unpacker doesn't raise exception for invalid input (Merge branch '0.2'
Fixes #40

Conflicts:
	ChangeLog.rst
	msgpack/_unpacker.pyx
	msgpack/_version.py
2012-12-29 01:43:16 +09:00
INADA Naoki
72416e403c Fix unpacker doesn't raise exception for invalid input. 2012-12-29 01:39:59 +09:00
INADA Naoki
7b11a42825 Update ChangeLog. 2012-12-22 17:13:45 +09:00
INADA Naoki
9dc299bd8d 0.2.4 2012-12-22 17:11:30 +09:00
INADA Naoki
431fe8f9e0 Update changelog 2012-12-22 17:04:53 +09:00
INADA Naoki
d796d696d1 revert unwanted changes. 2012-12-22 13:09:35 +09:00
INADA Naoki
833b85f173 Merge branch '0.2-maint' (fix #39) 2012-12-22 13:08:46 +09:00
INADA Naoki
451631a11a Fixes segfault when Exception raised from hook (fixed #39) 2012-12-22 12:54:01 +09:00
INADA Naoki
ce2c5b22ef Check return value of _end functions. 2012-12-22 12:42:36 +09:00
INADA Naoki
79e44f86c9 Add NULL check. 2012-12-22 12:14:05 +09:00
INADA Naoki
0fa8c102d7 Add test reproducing SEGV 2012-12-22 12:12:32 +09:00
INADA Naoki
647af23373 Merge pull request #38 from msgpack/travis
Use tox on Travis.
2012-12-11 06:00:08 -08:00
INADA Naoki
34611a8ccd Use newer cython. 2012-12-11 22:49:20 +09:00
INADA Naoki
7bebb665fb Set PIP_USE_MIRRORS=true for faster venv creation. 2012-12-11 22:46:28 +09:00
INADA Naoki
1dd9280bff Install tox 2012-12-11 22:42:08 +09:00
INADA Naoki
ef054cef51 fix package name 2012-12-11 22:40:07 +09:00
INADA Naoki
8b27482f5f Use tox on Travis. 2012-12-11 22:36:09 +09:00
INADA Naoki
2ad02bb11a Add Python 3.3 to .travis.yml 2012-12-11 22:30:38 +09:00
INADA Naoki
78c345555b Update .travis.yml 2012-12-11 22:26:23 +09:00
INADA Naoki
685026d2e1 Split _msgpack.pyx (fix #34) 2012-12-11 22:17:36 +09:00
INADA Naoki
1c5b865db3 Update .gitignore 2012-12-11 22:15:53 +09:00
INADA Naoki
b79e5ba4e5 Split _msgpack.pyx 2012-12-11 22:15:21 +09:00
INADA Naoki
eec02b8729 remove unused import 2012-12-11 22:05:48 +09:00
INADA Naoki
280d56eb9b rename _msgpack.pyx => _packer.pyx 2012-12-11 22:05:00 +09:00
INADA Naoki
e9f9e9e83f Don't use c++ on travis. 2012-12-11 09:26:15 +09:00
INADA Naoki
cb7dff3319 Add .travis.yml 2012-12-11 02:59:45 +09:00
INADA Naoki
4a20700e20 prepare 0.2.3 2012-12-11 02:56:20 +09:00
INADA Naoki
6ae363ea27 Update changelog for 0.2.3 2012-12-11 02:52:24 +09:00
INADA Naoki
3478406537 Fix tests. 2012-12-11 02:46:13 +09:00
INADA Naoki
171145e562 Update changelog. 2012-12-10 21:54:17 +09:00
INADA Naoki
4adc6f194d Add autoreset option to Packer. 2012-12-10 21:47:18 +09:00
INADA Naoki
537a2ab3f2 Add Packer.pack_pairs. 2012-12-10 21:26:41 +09:00
INADA Naoki
0c7ab7c344 refactoring: remove pack_define.h 2012-12-10 20:17:18 +09:00
INADA Naoki
0d63c67e98 Merge pull request #37 from msgpack/exceptions
Split exceptions
2012-12-10 03:13:21 -08:00
INADA Naoki
1c0fe10a2f Remove unused UnpackException. 2012-12-10 20:12:38 +09:00
INADA Naoki
30025c7ea0 Improve docstring. 2012-12-10 20:06:00 +09:00
INADA Naoki
ed40c671da pack raise MemoryError when realloc is failed. 2012-12-10 01:42:38 +09:00
INADA Naoki
4480227e06 Improve docstrings. 2012-12-10 00:39:04 +09:00
INADA Naoki
c7161e9403 Add note for StopIteration => OutOfData. 2012-12-10 00:35:26 +09:00
INADA Naoki
219d47503c Split exceptions. 2012-12-10 00:31:19 +09:00
INADA Naoki
dd5c76b955 Add NOTE for changing default value of use_list. 2012-12-07 11:35:16 +09:00
INADA Naoki
f8d7dea762 Merge pull request #35 from jnothman/patch-1
Fix README re default use_list=True
2012-12-06 18:23:04 -08:00
INADA Naoki
4d844df492 Merge pull request #36 from jnothman/patch-3
Warn about StopIteration in README
2012-12-06 18:11:11 -08:00
jnothman
56ef0d07de Warn about StopIteration in README 2012-12-07 11:12:19 +11:00
jnothman
edd2e52373 Fix README re default use_list=True 2012-12-07 00:53:17 +11:00
INADA Naoki
15f309c0b1 Add note about use_list. 2012-12-06 22:26:39 +09:00
INADA Naoki
a1577a8838 Merge branch 'master' of github.com:msgpack/msgpack-python 2012-12-06 22:15:38 +09:00
INADA Naoki
c1d15df87a Add Unpacker.read_bytes().
It reads from inner buffer without unpacking.

Merge remote-tracking branch 'jnothman/patch-2'

Conflicts:
	msgpack/_msgpack.pyx
2012-12-06 22:13:28 +09:00
INADA Naoki
4d0f4a63be Merge pull request #33 from jnothman/readme-updates
Readme updates
2012-12-06 05:10:20 -08:00
Joel Nothman
fc41ed606d Mention CPython and MessagePack in README 2012-12-06 23:44:27 +11:00
Joel Nothman
659d0961a3 Brief mention of Unpacker.feed in README 2012-12-06 23:36:46 +11:00
Joel Nothman
6d4115f64b Minor grammar fixes 2012-12-06 23:36:16 +11:00
Joel Nothman
de3724c1de README documentation of advanced Unpacker features 2012-12-06 23:34:18 +11:00
INADA Naoki
53b67f1449 Add changelog 2012-12-06 21:17:44 +09:00
Joel Nothman
c567ad1c52 Describe object_pairs_hook in README 2012-12-06 23:10:25 +11:00
Joel Nothman
caecc0098e Change Unpacker example to read from stream 2012-12-06 23:01:12 +11:00
INADA Naoki
6b78223231 Fix test failuar. 2012-12-06 19:15:04 +09:00
INADA Naoki
54916f79a5 Merge pull request #23 from jnothman/write_bytes
Allow packed data to be captured while executing skip(), etc.
2012-12-06 01:49:58 -08:00
INADA Naoki
1c389135b8 (travis) Travis doesn't support Python 3.3 yet. 2012-11-07 08:28:30 +09:00
INADA Naoki
b14caa419c Use system cython. 2012-11-07 08:15:46 +09:00
INADA Naoki
67d8cc6c4f (travis) Python 2.5 can't pass tests.
Python 2.5 doesn't have b'' literal.
2012-11-07 02:24:26 +09:00
INADA Naoki
ec655b9f2c Fix segmentation fault. 2012-11-07 02:23:57 +09:00
INADA Naoki
df6b969a8d (travis) Fix test script 2012-11-07 02:04:42 +09:00
INADA Naoki
dffc89ef0b Merge branch 'master' of github.com:msgpack/msgpack-python 2012-11-07 02:02:31 +09:00
INADA Naoki
0ef52869e3 Fix unpack error on Python 3.2.
ctx.user.encoding and ctx.user.unicode_errors may refer to deallocated string.
2012-11-07 02:00:08 +09:00
INADA Naoki
e404c9845f (travis) Install itself. 2012-11-06 10:39:10 +09:00
INADA Naoki
eb61b4de9e (travis) Install msgpack-python itself. 2012-11-06 09:46:02 +09:00
INADA Naoki
c75ef976d7 Fix build error on first time. 2012-11-06 09:37:55 +09:00
INADA Naoki
dbf50c9f78 Add travis status. 2012-11-06 09:35:06 +09:00
INADA Naoki
3ec2e6e729 Add six to .travis.yml 2012-11-06 02:32:59 +09:00
INADA Naoki
2f0078d395 Add travis config. 2012-11-06 02:10:36 +09:00
INADA Naoki
1e17642264 Merge pull request #31 from seliopou/master
Fix Unpacker example in README
2012-11-01 21:24:41 -07:00
Spiros Eliopoulos
d025d90882 Put Unpacker read loop back into buffer read loop
So it works for streaming as intended.
2012-11-01 11:29:33 -04:00
Spiros Eliopoulos
30233a5a99 Fix Unpacker example in README
The example did not properly deserialize, since it was dropping bytes
from the input stream.
2012-11-01 00:55:33 -04:00
Spiros Eliopoulos
62e8f40024 Fix typo in README
seed -> feed
2012-11-01 00:41:06 -04:00
INADA Naoki
cf82b91c1a Merge pull request #30 from drednout/docs_datetime
README improvement:  example of usage default/object_hook
2012-10-12 05:49:32 -07:00
INADA Naoki
8f82252687 Merge pull request #29 from drednout/master
Fix for an issue #28
2012-10-12 05:48:17 -07:00
Alexei Romanoff
fa1c4745ec README formatting has been improved 2012-10-12 15:25:14 +03:00
Alexei Romanoff
541d22d004 Added example of using default/object_hook into README 2012-10-12 13:32:29 +03:00
Alexei Romanoff
cf89f18be7 segfault fixed when data is unpacked using list_hook,
this bug is a twin to #28.
Unit-test is also attached.
2012-10-12 13:19:53 +03:00
Alexei Romanoff
4ea952f39d Added unit-test for issue https://github.com/msgpack/msgpack-python/issues/28 2012-10-12 12:34:18 +03:00
Alexei Romanoff
89ce16df39 A segfault fixed in the issue https://github.com/msgpack/msgpack-python/issues/28 2012-10-12 12:32:32 +03:00
Joel Nothman
df4f23779d Merge commit 'd5f9995' into read_bytes
Conflicts:
	msgpack/_msgpack.pyx
2012-10-04 11:31:40 +10:00
Joel Nothman
87f292cbf9 Allow packed data to be captured while executing skip(), etc. 2012-10-04 11:26:29 +10:00
INADA Naoki
d5f99959cc Fix some test failure. 2012-10-01 01:34:58 +09:00
INADA Naoki
e016b3dca0 Merge remote-tracking branch 'jnothman/read_size_cpp'
Conflicts:
	msgpack/_msgpack.pyx
	setup.py
2012-10-01 01:31:58 +09:00
Joel Nothman
9d9c3eecb8 Packer.pack_array/map_header to correspond to read functions 2012-09-25 01:19:10 +10:00
Joel Nothman
0431a766f4 read_array/map_header functionality 2012-09-25 01:19:07 +10:00
Joel Nothman
d56e2b2c8a Use C++ function templating for skip()/construct() 2012-09-25 00:30:15 +10:00
INADA Naoki
1526316a08 Merge branch '0.2-maint'
Conflicts:
	msgpack/_msgpack.pyx
2012-09-24 03:10:37 +09:00
INADA Naoki
477d3b152f Fix warnings. 2012-09-24 03:08:13 +09:00
INADA Naoki
e381032641 Support object_pairs_hook
Merge remote-tracking branch 'jnothman/object_pairs_hook' into 0.2-maint
Conflicts:
	msgpack/_msgpack.pyx
	test/test_pack.py
	test/test_sequnpack.py
2012-09-24 03:05:39 +09:00
INADA Naoki
927d29131d Write about warning in changelog. 2012-09-24 02:55:50 +09:00
INADA Naoki
d13f10c02e Start 0.3.0dev1 2012-09-24 02:46:34 +09:00
INADA Naoki
ac403ef68d Start 0.2.3dev 2012-09-24 02:45:37 +09:00
INADA Naoki
15a46eb143 use_list=1 is default 2012-09-24 02:42:38 +09:00
INADA Naoki
1c6ef90d40 Merge branch '0.2-maint'
Conflicts:
	test/test_sequnpack.py
2012-09-24 02:40:39 +09:00
INADA Naoki
c280e58988 Fix warnings in tests. 2012-09-24 02:39:02 +09:00
INADA Naoki
d503788e95 Warn when use_list is not specified.
Conflicts:
	test/test_sequnpack.py
2012-09-24 02:38:54 +09:00
INADA Naoki
c2a2d417f1 Fix warnings in tests. 2012-09-24 02:20:53 +09:00
INADA Naoki
60df5eadaf Warn when use_list is not specified. 2012-09-24 02:12:55 +09:00
Joel Nothman
e7c51d9089 Cleaner read_bytes and a test case
No longer reads via buffer for unbuffered bytes
2012-09-23 20:46:49 +10:00
Joel Nothman
77942514db Implement object_pairs_hook 2012-09-23 19:37:28 +10:00
Joel Nothman
b06ed8eb75 Factor context initialisation from unpackb and Unpacker 2012-09-23 19:36:27 +10:00
INADA Naoki
96ed236c1d Merge branch '0.2-maint' 2012-09-23 11:22:13 +09:00
INADA Naoki
c3da845868 Add docstring about raising ValueError when there are extra bytes. 2012-09-23 11:16:59 +09:00
INADA Naoki
48d693c1b9 Add test for .skip() 2012-09-23 10:09:51 +09:00
INADA Naoki
9963522d46 Add .skip() method to Unpacker. (Merge branch 'skip') 2012-09-23 10:04:41 +09:00
INADA Naoki
7d142d2bef Add changelog 2012-09-23 10:02:11 +09:00
INADA Naoki
eaf9891b42 clean some cython code. 2012-09-23 10:00:18 +09:00
INADA Naoki
65f582345c Merge branch 'skip' of git://github.com/jnothman/msgpack-python into skip 2012-09-23 09:08:21 +09:00
INADA Naoki
e8842efded Add py33 to tox. 2012-09-23 08:57:32 +09:00
INADA Naoki
36b88b4077 Add Roadmap. 2012-09-23 03:44:41 +09:00
INADA Naoki
8b2959bc0a pack and packb raises ValueError when extra data passed. 2012-09-23 03:39:14 +09:00
INADA Naoki
4d643894a1 Support packing subclass of dict. 2012-09-23 02:13:32 +09:00
Joel Nothman
032df6f2d9 Merge remote-tracking branch 'origin/master' into skip 2012-09-22 22:58:46 +10:00
Joel Nothman
28058fb53d A first implementation of Unpacker.skip() 2012-09-22 22:57:00 +10:00
jnothman
ffec10dff3 Expose packed stream with Unpacker.read_bytes()
At present, Unpacker buffers reading from the stream, meaning the stream can no longer be read directly. Unpacker.read_bytes(n) provides access to the underlying data, allowing content of known size to be read without unpacking.
2012-09-21 16:03:41 +10:00
INADA Naoki
5b66edaa15 0.2.2 (again) 2012-09-21 14:17:34 +09:00
INADA Naoki
be405ec5cf 0.2.2 2012-09-21 14:16:40 +09:00
INADA Naoki
51335bbee4 packb supports use_single_float option. 2012-09-21 14:15:30 +09:00
INADA Naoki
397d772e11 Rename use_float to use_single_float. 2012-09-21 14:08:34 +09:00
INADA Naoki
3b45a51d61 Merge branch 'master' of github.com:msgpack/msgpack-python 2012-09-21 13:59:35 +09:00
INADA Naoki
0297b36bda Fix reading more than read_size. 2012-09-21 13:58:56 +09:00
INADA Naoki
f14d926e1d Merge pull request #11 from TobiasSimon/float_ext
added float serialization support
2012-08-28 04:02:17 -07:00
INADA Naoki
56ec7ee1b1 Update changelog 2012-08-24 10:05:38 +09:00
INADA Naoki
235b928be7 Stop disable/enable gc.
json and pickle modules don't stop gc. It's a very dirty hack.
2012-08-24 09:53:18 +09:00
INADA Naoki
6aa4aead31 Fix build from pyx doesn't work. 2012-08-21 14:56:32 +09:00
TobiasSimon
e63a943753 added float serialization support 2012-08-20 21:56:55 +02:00
INADA Naoki
f74ce3caaa 0.2.1 2012-08-20 00:11:38 +09:00
INADA Naoki
814c42c291 Change the way to manage version number. 2012-08-19 04:17:56 +09:00
INADA Naoki
29b4b785d0 Fix build_ext doesn't work on Python 3. 2012-08-19 03:04:19 +09:00
INADA Naoki
670bb3ca15 Use C++ compiler on win32. 2012-08-19 02:53:16 +09:00
INADA Naoki
7c03f322fa Fix on SPARC Solaris.
Use C++ only on Windows.
Define ENDIAN macros from `sys.byteorder`.
2012-08-19 01:05:41 +09:00
INADA Naoki
bf4124f592 Fix setup.py sdist doesn't generates c++ source. 2012-07-25 09:19:10 +09:00
INADA Naoki
59c8b51e5b Default value of read_size is min(1024**2, max_buffer_size) 2012-07-20 02:05:43 +09:00
INADA Naoki
53ca2bb648 raise ValueError when read_size > max_buffer_size. 2012-07-20 02:02:54 +09:00
INADA Naoki
e5462ff72f Add test for max_buffer_size. 2012-07-20 02:02:37 +09:00
INADA Naoki
7b1167044b Add max_buffer_size to Unpacker. 2012-07-13 21:28:16 +09:00
INADA Naoki
e133c7fd27 Start 0.2.1 2012-07-13 21:15:11 +09:00
INADA Naoki
2122b46b84 Fix using deprecated api in tests. 2012-07-04 14:58:36 +09:00
INADA Naoki
4bff55db9f Fix rst syntax. 2012-06-27 18:25:56 +09:00
INADA Naoki
8514871c9b release 0.2.0 2012-06-27 18:23:47 +09:00
INADA Naoki
002a941b43 Use setuptools to build egg package. 2012-06-27 18:16:59 +09:00
INADA Naoki
52a7c02a50 update .gitignore 2012-06-27 18:15:44 +09:00
INADA Naoki
dd5b1e265a remove deprecated api. 2012-06-27 18:07:02 +09:00
INADA Naoki
288e820293 prepare 0.2 2012-06-27 18:05:35 +09:00
INADA Naoki
06ed24a529 Fix setup scripts.
Support _msgpack.cpp
2012-06-26 17:37:22 +09:00
INADA Naoki
ebe4c1f4bc manage to compile on windows
Use C++ compiler to build.
2012-06-26 17:27:29 +09:00
INADA Naoki
58eb7d0ce8 Update changelog 2012-06-26 15:34:51 +09:00
INADA Naoki
188da01777 Fix new version of msgpack. 2012-06-26 15:19:59 +09:00
INADA Naoki
812c8bcff4 Update msgpack version. 2012-06-26 13:21:10 +09:00
INADA Naoki
5b0353eac6 Start 0.1.14dev 2012-06-19 14:31:32 +09:00
INADA Naoki
636f4529aa Fix tests to pass. 2012-06-19 14:20:56 +09:00
INADA Naoki
0b38e86534 unify tests for py2 and py3 2012-06-19 13:55:14 +09:00
INADA Naoki
76f34667a0 Add ".tox" to .gitignore 2012-06-19 13:40:00 +09:00
INADA Naoki
f1dd03fe80 Add test for subtype. 2012-06-19 13:39:32 +09:00
INADA Naoki
40d4b8946b Start using tox 2012-06-19 13:39:23 +09:00
INADA Naoki
b95ea1467f Merge pull request #8 from steeve/patch-1
Make sure objects inherited from Dict are properly casted
2012-06-18 21:34:26 -07:00
Steeve Morin
07506667c9 Make sure objects inherited from Dict are properly casted (or else Cython will complain and crash). 2012-06-16 13:56:46 +03:00
INADA Naoki
7a4af28fa1 release 0.1.13 2012-04-21 14:25:07 +09:00
INADA Naoki
91a1abb737 Merge branch 'master' of github.com:msgpack/msgpack-python 2012-03-08 16:59:56 +09:00
INADA Naoki
64bdf6bcd6 small optimization. 2012-03-08 16:59:08 +09:00
methane
02f01f60fc Merge pull request #7 from steeve/patch-3
Fix massive memory leak with object_hook and list_hook when unpacking.
2012-02-28 08:19:29 -08:00
Steeve Morin
a5bc6b7385 Better prototypes. 2012-02-28 15:41:44 +01:00
Steeve Morin
31b7fda17b Fix massive memory leak with object_hook and list_hook when unpacking. 2012-02-28 15:36:58 +01:00
methane
3a472b1624 Merge pull request #6 from steeve/patch-2
Be greedier when checking for tuples or lists.
2012-02-16 11:01:25 -08:00
Steeve Morin
5b878b6038 Be greedier when checking for tuples or lists. 2012-02-16 17:15:04 +01:00
methane
c2e297d5dd Merge pull request #2 from urso/master
adopt setup.py to work with python older 2.7
2012-02-10 21:37:32 -08:00
methane
1485b998a4 Merge pull request #3 from wolever/patch-1
Correcting 'utf-8' to 'unicode'.
2012-02-10 21:36:18 -08:00
David Wolever
b764169775 Correcting 'utf-8' to 'unicode'. 2012-02-10 15:08:49 -05:00
Steffen Siering
d685614138 adopt setup.py to work with python installation older 2.7 2012-01-18 19:02:11 +01:00
67 changed files with 6626 additions and 2229 deletions

33
.github/workflows/docs.yaml vendored Normal file

@ -0,0 +1,33 @@
name: docs
on: ["push", "pull_request"]

jobs:
  docs:
    # We want to run on external PRs, but not on our own internal PRs as they'll be run
    # by the push to the branch.
    if: github.event_name == 'push' || github.event.pull_request.head.repo.full_name != github.repository
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Setup Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.x'
          cache: "pip"
          cache-dependency-path: |
            requirements.txt
            docs/requirements.txt

      - name: Build
        run: |
          pip install -r requirements.txt
          make cython

      - name: Sphinx Documentation Generator
        run: |
          pip install -r docs/requirements.txt
          make docs

22
.github/workflows/lint.yaml vendored Normal file

@ -0,0 +1,22 @@
name: lint
on: ["push", "pull_request"]

jobs:
  lint:
    # We want to run on external PRs, but not on our own internal PRs as they'll be run
    # by the push to the branch.
    if: github.event_name == 'push' || github.event.pull_request.head.repo.full_name != github.repository
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: ruff check
        run: |
          pipx run ruff check --diff msgpack/ test/ setup.py

      - name: ruff format
        run: |
          pipx run ruff format --diff msgpack/ test/ setup.py

61
.github/workflows/test.yml vendored Normal file

@ -0,0 +1,61 @@
name: Run tests
on:
  push:
    branches: [main]
  pull_request:
  create:

jobs:
  test:
    strategy:
      matrix:
        os: ["ubuntu-latest", "windows-latest", "windows-11-arm", "macos-latest"]
        py: ["3.14", "3.14t", "3.13", "3.12", "3.11", "3.10"]
        exclude:
          - os: windows-11-arm
            py: "3.10"

    runs-on: ${{ matrix.os }}
    name: Run test with Python ${{ matrix.py }} on ${{ matrix.os }}

    steps:
      - name: Checkout
        uses: actions/checkout@v5

      - name: Set up Python
        uses: actions/setup-python@v6
        with:
          python-version: ${{ matrix.py }}
          allow-prereleases: true
          cache: "pip"

      - name: Prepare
        shell: bash
        run: |
          python -m pip install -r requirements.txt pytest

      - name: Build
        shell: bash
        run: |
          make cython
          pip install .

      - name: Test (C extension)
        shell: bash
        run: |
          pytest -v test

      - name: Test (pure Python fallback)
        shell: bash
        run: |
          MSGPACK_PUREPYTHON=1 pytest -v test

      - name: build packages
        shell: bash
        run: |
          python -m build -nv

      - name: upload packages
        uses: actions/upload-artifact@v4
        with:
          name: dist-${{ matrix.os }}-${{ matrix.py }}
          path: dist

88
.github/workflows/wheel.yml vendored Normal file

@ -0,0 +1,88 @@
name: Build sdist and Wheels
on:
  push:
    branches: [main]
  release:
    types:
      - published
  workflow_dispatch:

jobs:
  build_wheels:
    strategy:
      matrix:
        # macos-13 is for intel
        os: ["ubuntu-24.04", "ubuntu-24.04-arm", "windows-latest", "windows-11-arm", "macos-13", "macos-latest"]
    runs-on: ${{ matrix.os }}
    name: Build wheels on ${{ matrix.os }}

    steps:
      - uses: actions/checkout@v6

      - uses: actions/setup-python@v6
        with:
          python-version: "3.x"
          cache: "pip"

      - name: Cythonize
        shell: bash
        run: |
          pip install -r requirements.txt
          make cython

      - name: Build
        uses: pypa/cibuildwheel@v3.3.0
        env:
          CIBW_TEST_REQUIRES: "pytest"
          CIBW_TEST_COMMAND: "pytest {package}/test"
          CIBW_SKIP: "pp* cp38-* cp39-* cp310-win_arm64"

      - name: Build sdist
        if: runner.os == 'Linux' && runner.arch == 'X64'
        run: |
          pip install build
          python -m build -s -o wheelhouse

      - name: Upload Wheels to artifact
        uses: actions/upload-artifact@v4
        with:
          name: wheels-${{ matrix.os }}
          path: wheelhouse

  # combine all wheels into one artifact
  combine_wheels:
    needs: [build_wheels]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v4
        with:
          # unpacks all CIBW artifacts into dist/
          pattern: wheels-*
          path: dist
          merge-multiple: true

      - name: Upload Wheels to artifact
        uses: actions/upload-artifact@v4
        with:
          name: wheels-all
          path: dist

  # https://github.com/pypa/cibuildwheel/blob/main/examples/github-deploy.yml
  upload_pypi:
    needs: [build_wheels]
    runs-on: ubuntu-latest
    environment: pypi
    permissions:
      id-token: write
    if: github.event_name == 'release' && github.event.action == 'published'
    # or, alternatively, upload to PyPI on every tag starting with 'v' (remove on: release above to use this)
    # if: github.event_name == 'push' && startsWith(github.ref, 'refs/tags/v')
    steps:
      - uses: actions/download-artifact@v4
        with:
          # unpacks all CIBW artifacts into dist/
          pattern: wheels-*
          path: dist
          merge-multiple: true

      - uses: pypa/gh-action-pypi-publish@release/v1
        #with:
        #  To test: repository-url: https://test.pypi.org/legacy/

11
.gitignore vendored

@ -1,8 +1,17 @@
MANIFEST
build/*
dist/*
.tox
.python-version
*.pyc
*.pyo
*.so
*~
msgpack/__version__.py
msgpack/_msgpack.c
msgpack/*.c
msgpack/*.cpp
*.egg-info
/venv
/tags
/docs/_build
.cache

24
.readthedocs.yaml Normal file

@ -0,0 +1,24 @@
# Read the Docs configuration file for Sphinx projects.
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details.
version: 2

build:
  os: ubuntu-22.04
  tools:
    python: "3.11"
  apt_packages:
    - build-essential
  jobs:
    pre_install:
      - pip install -r requirements.txt
      - make cython

python:
  install:
    - method: pip
      path: .
    - requirements: docs/requirements.txt

sphinx:
  configuration: docs/conf.py

ChangeLog.rst

@ -1,19 +1,561 @@
1.1.2
=====
Release Date: 2025-10-08
This release does not change the source code. It only updates wheel building:
* Update Cython to v3.1.4
* Update cibuildwheel to v3.2.0
* Drop Python 3.8
* Add Python 3.14
* Add windows-arm
1.1.1
=====
Release Date: 2025-06-13
* No change from 1.1.1rc1.
1.1.1rc1
========
Release Date: 2025-06-06
* Update Cython to 3.1.1 and cibuildwheel to 2.23.3.
1.1.0
=====
Release Date: 2024-09-10
* use ``PyLong_*`` instead of ``PyInt_*`` for compatibility with
future Cython. (#620)
1.1.0rc2
========
Release Date: 2024-08-19
* Update Cython to 3.0.11 for better Python 3.13 support.
* Update cibuildwheel to 2.20.0 to build Python 3.13 wheels.
1.1.0rc1
========
Release Date: 2024-05-07
* Update Cython to 3.0.10 to reduce C warnings and prepare for Python 3.13.
* Stop using C++ mode in Cython to reduce compile errors on some compilers.
* ``Packer()`` has ``buf_size`` option to specify initial size of
internal buffer to reduce reallocation.
* The default internal buffer size of ``Packer()`` is reduced from
1MiB to 256KiB to optimize for common use cases. Use ``buf_size``
if you are packing large data.
* ``Timestamp.to_datetime()`` and ``Timestamp.from_datetime()`` become
more accurate by avoiding floating point calculations. (#591)
* The Cython code for ``Unpacker`` has been slightly rewritten for maintainability.
* The fallback implementation of ``Packer()`` and ``Unpacker()`` now uses keyword-only
arguments to improve compatibility with the Cython implementation.
1.0.8
=====
Release Date: 2024-03-01
* Update Cython to 3.0.8. This fixes a memory leak when iterating an
``Unpacker`` object on Python 3.12.
* Do not include C/Cython files in binary wheels.
1.0.7
=====
Release Date: 2023-09-28
* Fix build error of extension module on Windows. (#567)
* ``setup.py`` no longer skips build errors of the extension module. (#568)
1.0.6
=====
Release Date: 2023-09-21
.. note::
v1.0.6 wheels for Windows don't contain the extension module.
Please upgrade to v1.0.7 or newer.
* Add Python 3.12 wheels (#517)
* Remove Python 2.7, 3.6, and 3.7 support
1.0.5
=====
Release Date: 2023-03-08
* Use ``__BYTE_ORDER__`` instead of ``__BYTE_ORDER`` for portability. (#513, #514)
* Add Python 3.11 wheels (#517)
* fallback: Fix packing multidimensional memoryview (#527)
1.0.4
=====
Release Date: 2022-06-03
* Support Python 3.11 (beta).
* Don't define `__*_ENDIAN__` macro on Unix. by @methane in https://github.com/msgpack/msgpack-python/pull/495
* Use PyFloat_Pack8() on Python 3.11a7 by @vstinner in https://github.com/msgpack/msgpack-python/pull/499
* Fix Unpacker max_buffer_length handling by @methane in https://github.com/msgpack/msgpack-python/pull/506
1.0.3
=====
Release Date: 2021-11-24 JST
* Fix Docstring (#459)
* Fix error formatting (#463)
* Improve error message about strict_map_key (#485)
1.0.2
=====
* Fix year 2038 problem regression in 1.0.1. (#451)
1.0.1
=====
* Add Python 3.9 and linux/arm64 wheels. (#439)
* Fixed Unpacker.tell() after read_bytes() (#426)
* Fixed unpacking datetime before epoch on Windows (#433)
* Fixed fallback Packer didn't check DateTime.tzinfo (#434)
1.0.0
=====
Release Date: 2020-02-17
* Remove Python 2 support from the ``msgpack/_cmsgpack``.
``msgpack/fallback`` still supports Python 2.
* Remove ``encoding`` option from the Packer and Unpacker.
* Unpacker: The default value of ``max_buffer_size`` is changed to 100MiB.
* Unpacker: ``strict_map_key`` is True by default now.
* Unpacker: String map keys are interned.
* Drop old buffer protocol support.
* Support Timestamp type.
* Support serializing and deserializing ``datetime`` objects
with tzinfo.
* Unpacker: Fix ``Unpacker.read_bytes()`` in the fallback implementation. (#352)
0.6.2
=====
Release Date: 2019-09-20
* Support Python 3.8.
* Update Cython to 0.29.13 to support Python 3.8.
* Some small optimizations.
0.6.1
======
Release Date: 2019-01-25
This release mitigates the pain caused by v0.6.0's reduced max input limits,
which were introduced for security reasons.
* ``unpackb(data)`` configures ``max_*_len`` options from ``len(data)``,
instead of static default sizes.
* ``Unpacker(max_buffer_len=N)`` configures ``max_*_len`` options from ``N``,
instead of static default sizes.
* ``max_bin_len``, ``max_str_len``, and ``max_ext_len`` are deprecated.
Since this is a minor release, the deprecation is documentation-only.
0.6.0
======
Release Date: 2018-11-30
This release contains some backward-incompatible changes for security reasons (DoS).
Important changes
-----------------
* unpacker: Default values of input limits are smaller than before to avoid DoS attacks.
If you need to handle large data, you need to specify the limits manually. (#319)
* Unpacker doesn't wrap the underlying ``ValueError`` (including ``UnicodeError``) into
``UnpackValueError``. If you want to catch all exceptions during unpack, you need
to use ``try ... except Exception`` with a minimal try block. (#323, #233)
* ``PackValueError`` and ``PackOverflowError`` are also removed. You need to catch
the normal ``ValueError`` and ``OverflowError``. (#323, #233)
* Unpacker has a ``strict_map_key`` option now. When it is true, only bytes and str
(unicode in Python 2) are allowed for map keys. It is recommended to avoid
hash DoS. The default value of this option is False for backward compatibility,
but it will be changed to True in 1.0. (#296, #334)
Other changes
-------------
* Extension modules are merged. There is ``msgpack._cmsgpack`` instead of
``msgpack._packer`` and ``msgpack._unpacker``. (#314, #328)
* Add ``Unpacker.getbuffer()`` method. (#320)
* unpacker: ``msgpack.StackError`` is raised when input data contains too
deeply nested data. (#331)
* unpacker: ``msgpack.FormatError`` is raised when input data is not in valid
msgpack format. (#331)
0.5.6
======
* Fix fallback.Unpacker.feed() dropping unused data from the buffer (#287)
* Resurrect fallback.unpack() and _unpacker.unpack().
They were removed in 0.5.5, but that broke backward compatibility. (#288, #290)
0.5.5
======
* Fix memory leak in pure Python Unpacker.feed() (#283)
* Fix unpack() not supporting the `raw` option (#285)
0.5.4
======
* Undeprecate ``unicode_errors`` option. (#278)
0.5.3
======
* Fixed regression when passing ``unicode_errors`` to Packer but not ``encoding``. (#277)
0.5.2
======
* Add ``raw`` option to Unpacker. It is preferred over the ``encoding`` option.
* Packer.pack() reset buffer on exception (#274)
0.5.1
======
* Remove FutureWarning about use_bin_type option (#271)
0.5.0
======
There are some deprecations. Please read changes carefully.
Changes
-------
* Drop Python 2.6 and ~3.4 support. Python 2.7 and 3.5+ are supported.
* Deprecate useless custom exceptions. Use ValueError instead of PackValueError,
Exception instead of PackException and UnpackException, etc...
See msgpack/exceptions.py
* Add *strict_types* option to packer. It can be used to serialize subclasses of
builtin types. For example, when packing an object whose type is a subclass of dict,
``default()`` is called. ``default()`` is called for tuples too.
* Pure Python implementation supports packing memoryview object.
* Support packing bytearray.
* Add ``Unpacker.tell()``. And ``write_bytes`` option is deprecated.
Bugs fixed
----------
* Fixed zero-length raw not being decoded when encoding is specified. (#236)
0.4.8
=====
:release date: 2016-07-29
Bugs fixed
----------
* Calling ext_hook with wrong length. (Only on Windows, maybe. #203)
0.4.7
=====
:release date: 2016-01-25
Bugs fixed
----------
* Memory leak when unpack fails
Changes
-------
* Reduce compiler warnings while building extension module
* unpack() now accepts ext_hook argument like Unpacker and unpackb()
* Update Cython version to 0.23.4
* The default function is called on integer overflow
0.4.6
=====
:release date: 2015-03-13
Bugs fixed
----------
* fallback.Unpacker: Fix data corruption when OutOfData is raised.
This bug only affects "Streaming unpacking."
0.4.5
=====
:release date: 2015-01-25
Incompatible Changes
--------------------
Changes
-------
Bugs fixed
----------
* Fix test failure on pytest 2.3. (by @ktdreyer)
* Fix typos in ChangeLog. (Thanks to @dmick)
* Improve README.rst (by @msabramo)
0.4.4
=====
:release date: 2015-01-09
Incompatible Changes
--------------------
Changes
-------
Bugs fixed
----------
* Fix compile error.
0.4.3
=====
:release date: 2015-01-07
Incompatible Changes
--------------------
Changes
-------
Bugs fixed
----------
* Unpacker may unpack a wrong uint32 value on 32-bit or LLP64 environments. (#101)
* Build failed on Windows Python 2.7.
0.4.2
=====
:release date: 2014-03-26
Incompatible Changes
--------------------
Changes
-------
Bugs fixed
----------
* Unpacker doesn't increment the refcount of the ExtType hook.
* Packer raises no exception for inputs that don't fit the msgpack format.
0.4.1
=====
:release date: 2014-02-17
Incompatible Changes
--------------------
Changes
-------
* fallback.Unpacker.feed() supports bytearray.
Bugs fixed
----------
* Unpacker doesn't increment the refcount of hooks. Hooks may be GCed while unpacking.
* Unpacker may read an unfilled internal buffer.
0.4.0
=====
:release date: 2013-10-21
Incompatible Changes
--------------------
* Raises TypeError instead of ValueError when the packer receives an unsupported type.
Changes
-------
* Support New msgpack spec.
0.3.0
=====
Incompatible Changes
--------------------
* Default value of ``use_list`` is ``True`` for now. (It was ``False`` for 0.2.x.)
You should pass it explicitly for compatibility with 0.2.x.
* `Unpacker.unpack()` and some unpack methods now raise `OutOfData` instead of
`StopIteration`. `StopIteration` is used for iterator protocol only.
Changes
-------
* Pure Python fallback module is added. (thanks to bwesterb)
* Add ``.skip()`` method to ``Unpacker`` (thanks to jnothman)
* Add capturing feature. You can pass a writable object to
``Unpacker.unpack()`` as a second parameter.
* Add ``Packer.pack_array_header`` and ``Packer.pack_map_header``.
These methods pack only the header of each type.
* Add ``autoreset`` option to ``Packer`` (default: True).
When it is False, Packer doesn't return the packed bytes and doesn't clear its internal buffer.
* Add ``Packer.pack_map_pairs``. It packs a sequence of pairs into a map.
0.2.4
=====
:release date: 2012-12-22
Bugs fixed
----------
* Fix SEGV when object_hook or object_pairs_hook raise Exception. (#39)
0.2.3
=====
:release date: 2012-12-11
Changes
-------
* Warn when use_list is not specified. Its default value will be changed in 0.3.
Bugs fixed
----------
* Can't pack subclass of dict.
0.2.2
=====
:release date: 2012-09-21
Changes
-------
* Add ``use_single_float`` option to ``Packer``. When it is true, float
objects are packed in single-precision format.
Bugs fixed
----------
* ``unpack()`` didn't restore the gc state when it was called with gc disabled.
``unpack()`` no longer controls gc at all instead of restoring the gc state incorrectly.
Users can control the gc state themselves when gc causes a performance issue.
* ``Unpacker``'s ``read_size`` option wasn't used.
0.2.1
=====
:release date: 2012-08-20
Changes
-------
* Add ``max_buffer_size`` parameter to Unpacker. It limits the internal buffer size
and allows unpacking data from untrusted sources safely.
* Unpacker's buffer reallocation algorithm is less greedy now. It may cause a performance
decrease in rare cases, but it is more memory efficient and never allocates more than ``max_buffer_size``.
Bugs fixed
----------
* Fix msgpack not working on SPARC Solaris. The wrong byte order was chosen
at compile time; ``sys.byteorder`` is now used to get the correct byte order.
Many thanks to Chris Casey for providing a test environment.
0.2.0
=====
:release date: 2012-06-27
Changes
-------
* Drop support for Python 2.5 and unify tests for Py2 and Py3.
* Use the new version of msgpack-c. It packs correctly on big-endian platforms.
* Remove the deprecated packs and unpacks APIs.
Bugs fixed
----------
* #8 Packing subclass of dict raises TypeError. (Thanks to Steeve Morin.)
0.1.13
======
:release date: 2012-04-21
New
---
* Don't accept subtypes of list and tuple as msgpack lists. (Steeve Morin)
This allows customizing how they are serialized via the ``default`` argument.
Bugs fixed
----------
* Fix wrong error message. (David Wolever)
* Fix memory leak while unpacking when ``object_hook`` or ``list_hook`` is used.
(Steeve Morin)
Other changes
-------------
* setup.py works on Python 2.5 (Steffen Siering)
* Optimization for serializing dict.
0.1.12
=======
======
:release date: 2011-12-27
Bugs fixed
-------------
----------
* Re-enable packs/unpacks removed in 0.1.11. They will be removed when 0.2 is released.
0.1.11
=======
======
:release date: 2011-12-26
Bugs fixed
-------------
----------
* Include test code for Python3 to sdist. (Johan Bergström)
* Fix compilation error on MSVC. (davidgaleano)
@ -31,7 +573,7 @@ New feature
0.1.9
======
=====
:release date: 2011-01-29
New feature
@ -45,16 +587,16 @@ Bugs fixed
* Add MemoryError check.
0.1.8
======
=====
:release date: 2011-01-10
New feature
------------
-----------
* Support ``loads`` and ``dumps`` aliases for API compatibility with
simplejson and pickle.
* Add *object_hook* and *list_hook* option to unpacker. It allows you to
hook unpacing mapping type and array type.
hook unpacking mapping type and array type.
* Add *default* option to packer. It allows you to pack unsupported types.
@ -66,13 +608,13 @@ Bugs fixed
0.1.7
======
=====
:release date: 2010-11-02
New feature
------------
-----------
* Add *object_hook* and *list_hook* option to unpacker. It allows you to
hook unpacing mapping type and array type.
hook unpacking mapping type and array type.
* Add *default* option to packer. It allows you to pack unsupported types.

17
DEVELOP.md Normal file

@ -0,0 +1,17 @@
# Developer's note
### Build
```
$ make cython
```
### Test
MessagePack uses `pytest` for testing.
Run tests with the following command:
```
$ make test
```

MANIFEST.in

@ -1,5 +1,5 @@
include setup.py
include COPYING
include README.md
recursive-include msgpack *.h *.c *.pyx
recursive-include test *.py
recursive-include test3 *.py

Makefile

@ -1,12 +1,59 @@
.PHONY: test all python3
PYTHON_SOURCES = msgpack test setup.py
all:
.PHONY: all
all: cython
python setup.py build_ext -i -f
python setup.py build sdist
python3:
python3 setup.py build_ext -i -f
python3 setup.py build sdist
.PHONY: format
format:
ruff format $(PYTHON_SOURCES)
test:
nosetests test
.PHONY: lint
lint:
ruff check $(PYTHON_SOURCES)
.PHONY: doc
doc:
cd docs && sphinx-build -n -v -W --keep-going -b html -d doctrees . html
.PHONY: pyupgrade
pyupgrade:
@find $(PYTHON_SOURCES) -name '*.py' -type f -exec pyupgrade --py37-plus '{}' \;
.PHONY: cython
cython:
cython msgpack/_cmsgpack.pyx
.PHONY: test
test: cython
pip install -e .
pytest -v test
MSGPACK_PUREPYTHON=1 pytest -v test
.PHONY: serve-doc
serve-doc: all
cd docs && make serve
.PHONY: clean
clean:
rm -rf build
rm -f msgpack/_cmsgpack.cpp
rm -f msgpack/_cmsgpack.*.so
rm -f msgpack/_cmsgpack.*.pyd
rm -rf msgpack/__pycache__
rm -rf test/__pycache__
.PHONY: update-docker
update-docker:
docker pull quay.io/pypa/manylinux2014_i686
docker pull quay.io/pypa/manylinux2014_x86_64
docker pull quay.io/pypa/manylinux2014_aarch64
.PHONY: linux-wheel
linux-wheel:
docker run --rm -v `pwd`:/project -w /project quay.io/pypa/manylinux2014_i686 bash docker/buildwheel.sh
docker run --rm -v `pwd`:/project -w /project quay.io/pypa/manylinux2014_x86_64 bash docker/buildwheel.sh
.PHONY: linux-arm64-wheel
linux-arm64-wheel:
docker run --rm -v `pwd`:/project -w /project quay.io/pypa/manylinux2014_aarch64 bash docker/buildwheel.sh

242
README.md Normal file

@ -0,0 +1,242 @@
# MessagePack for Python
[![Build Status](https://github.com/msgpack/msgpack-python/actions/workflows/wheel.yml/badge.svg)](https://github.com/msgpack/msgpack-python/actions/workflows/wheel.yml)
[![Documentation Status](https://readthedocs.org/projects/msgpack-python/badge/?version=latest)](https://msgpack-python.readthedocs.io/en/latest/?badge=latest)
## What is this?
[MessagePack](https://msgpack.org/) is an efficient binary serialization format.
It lets you exchange data among multiple languages like JSON.
But it's faster and smaller.
This package provides CPython bindings for reading and writing MessagePack data.
## Install
```
$ pip install msgpack
```
### Pure Python implementation
The extension module in msgpack (`msgpack._cmsgpack`) does not support PyPy.
But msgpack provides a pure Python implementation (`msgpack.fallback`) for PyPy.
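If you want to confirm which implementation your interpreter picked up, here is a minimal sketch; it relies only on the public `msgpack` and `msgpack.fallback` modules:

```py
import msgpack
from msgpack import fallback

# True means the pure Python implementation is in use
# (e.g. on PyPy, or when the C extension is not available).
print(msgpack.Packer is fallback.Packer)
```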
### Windows
If you can't use a binary distribution, you need to install Visual Studio
or the Windows SDK on Windows.
Without the extension, the pure Python implementation on CPython runs slowly.
## How to use
### One-shot pack & unpack
Use `packb` for packing and `unpackb` for unpacking.
msgpack provides `dumps` and `loads` as aliases for compatibility with
`json` and `pickle`.
`pack` and `dump` pack to a file-like object.
`unpack` and `load` unpack from a file-like object.
```pycon
>>> import msgpack
>>> msgpack.packb([1, 2, 3])
b'\x93\x01\x02\x03'
>>> msgpack.unpackb(_)
[1, 2, 3]
```
Read the docstring for options.
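For example, a minimal sketch of the file-based variants (`data.msgpack` is just an illustrative path):

```py
import msgpack

# pack/dump write one object to a file-like object.
with open("data.msgpack", "wb") as f:
    msgpack.pack({"compact": True, "schema": 0}, f)

# unpack/load read one object back.
with open("data.msgpack", "rb") as f:
    print(msgpack.unpack(f))
```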
### Streaming unpacking
`Unpacker` is a "streaming unpacker". It unpacks multiple objects from one
stream (or from bytes provided through its `feed` method).
```py
import msgpack
from io import BytesIO

buf = BytesIO()
for i in range(100):
    buf.write(msgpack.packb(i))

buf.seek(0)

unpacker = msgpack.Unpacker(buf)
for unpacked in unpacker:
    print(unpacked)
```
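When the data arrives as raw byte chunks rather than from a readable stream, a sketch using `feed()` (the chunks below are made up for illustration):

```py
import msgpack

unpacker = msgpack.Unpacker()
for chunk in (msgpack.packb(1), msgpack.packb([2, 3]), msgpack.packb({"a": "b"})):
    unpacker.feed(chunk)      # buffer the incoming bytes
    for obj in unpacker:      # yields every fully received object
        print(obj)
```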
### Packing/unpacking of custom data types
It is also possible to pack/unpack custom data types. Here is an example for
`datetime.datetime`.
```py
import datetime

import msgpack

useful_dict = {
    "id": 1,
    "created": datetime.datetime.now(),
}


def decode_datetime(obj):
    if '__datetime__' in obj:
        obj = datetime.datetime.strptime(obj["as_str"], "%Y%m%dT%H:%M:%S.%f")
    return obj


def encode_datetime(obj):
    if isinstance(obj, datetime.datetime):
        return {'__datetime__': True, 'as_str': obj.strftime("%Y%m%dT%H:%M:%S.%f")}
    return obj


packed_dict = msgpack.packb(useful_dict, default=encode_datetime)
this_dict_again = msgpack.unpackb(packed_dict, object_hook=decode_datetime)
```
`Unpacker`'s `object_hook` callback receives a dict; the
`object_pairs_hook` callback may instead be used to receive a list of
key-value pairs.
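For example, a minimal sketch of the two hooks side by side (the hook callables are only illustrative):

```py
import msgpack

packed = msgpack.packb({"a": 1, "b": 2})

# object_hook receives the finished dict for each map.
print(msgpack.unpackb(packed, object_hook=lambda d: sorted(d.items())))

# object_pairs_hook receives a list of (key, value) tuples instead.
print(msgpack.unpackb(packed, object_pairs_hook=list))   # [('a', 1), ('b', 2)]
```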
NOTE: msgpack can encode datetime objects with tzinfo into the standard Timestamp ext type.
See the `datetime` option in the `Packer` docstring.
### Extended types
It is also possible to pack/unpack custom data types using the **ext** type.
```pycon
>>> import msgpack
>>> import array
>>> def default(obj):
...     if isinstance(obj, array.array) and obj.typecode == 'd':
...         return msgpack.ExtType(42, obj.tobytes())
...     raise TypeError("Unknown type: %r" % (obj,))
...
>>> def ext_hook(code, data):
...     if code == 42:
...         a = array.array('d')
...         a.frombytes(data)
...         return a
...     return msgpack.ExtType(code, data)
...
>>> data = array.array('d', [1.2, 3.4])
>>> packed = msgpack.packb(data, default=default)
>>> unpacked = msgpack.unpackb(packed, ext_hook=ext_hook)
>>> data == unpacked
True
```
### Advanced unpacking control
As an alternative to iteration, `Unpacker` objects provide `unpack`,
`skip`, `read_array_header`, and `read_map_header` methods. The former two
read an entire message from the stream, respectively deserializing and returning
the result, or ignoring it. The latter two methods return the number of elements
in the upcoming container, so that each element in an array, or key-value pair
in a map, can be unpacked or skipped individually.
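A rough sketch of how these methods combine (the fed data is made up; the method names are the ones listed above):

```py
import msgpack

unpacker = msgpack.Unpacker()
unpacker.feed(msgpack.packb([1, 2, 3]))
unpacker.feed(msgpack.packb({"a": "x", "b": "y"}))

n = unpacker.read_array_header()          # 3 elements follow
values = [unpacker.unpack() for _ in range(n)]

m = unpacker.read_map_header()            # 2 key-value pairs follow
for _ in range(m):
    key = unpacker.unpack()               # deserialize the key
    unpacker.skip()                       # ignore the value entirely

print(values)                             # [1, 2, 3]
```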
## Notes
### String and binary types in the old MessagePack spec
Early versions of msgpack didn't distinguish string and binary types.
The type for representing both string and binary types was named **raw**.
You can pack into and unpack from this old spec using `use_bin_type=False`
and `raw=True` options.
```pycon
>>> import msgpack
>>> msgpack.unpackb(msgpack.packb([b'spam', 'eggs'], use_bin_type=False), raw=True)
[b'spam', b'eggs']
>>> msgpack.unpackb(msgpack.packb([b'spam', 'eggs'], use_bin_type=True), raw=False)
[b'spam', 'eggs']
```
### ext type
To use the **ext** type, pass a `msgpack.ExtType` object to the packer.
```pycon
>>> import msgpack
>>> packed = msgpack.packb(msgpack.ExtType(42, b'xyzzy'))
>>> msgpack.unpackb(packed)
ExtType(code=42, data=b'xyzzy')
```
You can use it with `default` and `ext_hook`. See below.
### Security
When unpacking data received from an unreliable source, msgpack provides
two security options.
`max_buffer_size` (default: `100*1024*1024`) limits the internal buffer size.
It is also used to limit preallocated list sizes.
`strict_map_key` (default: `True`) limits the type of map keys to bytes and str.
While the MessagePack spec doesn't limit map key types,
there is a risk of a hash DoS.
If you need to support other types for map keys, use `strict_map_key=False`.
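A minimal sketch of both options together (the sizes and values are only illustrative):

```py
import msgpack

# Cap the internal buffer when feeding data from an untrusted peer.
unpacker = msgpack.Unpacker(max_buffer_size=1024 * 1024)
unpacker.feed(msgpack.packb(["small", "payload"]))
print(next(unpacker))

# strict_map_key=True (the default) rejects map keys that are not bytes or str.
packed = msgpack.packb({1: "one"})
print(msgpack.unpackb(packed, strict_map_key=False))   # {1: 'one'}
```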
### Performance tips
CPython's GC starts when the number of allocated objects grows.
This means unpacking may trigger unnecessary GC.
You can use `gc.disable()` when unpacking a large message.
A list is the default sequence type in Python.
However, a tuple is lighter than a list.
You can use `use_list=False` while unpacking when performance is important.
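A minimal sketch of both tips (illustrative only; measure before adopting either):

```py
import gc

import msgpack

packed = msgpack.packb(list(range(100_000)))

gc.disable()                    # avoid GC runs triggered by many new objects
try:
    data = msgpack.unpackb(packed, use_list=False)   # tuples are lighter than lists
finally:
    gc.enable()

print(type(data), len(data))    # <class 'tuple'> 100000
```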
## Major breaking changes in the history
### msgpack 0.5
The package name on PyPI was changed from `msgpack-python` to `msgpack` in 0.5.
When upgrading from msgpack-0.4 or earlier, do `pip uninstall msgpack-python` before
`pip install -U msgpack`.
### msgpack 1.0
* Python 2 support
  * The extension module no longer supports Python 2.
    The pure Python implementation (`msgpack.fallback`) is used for Python 2.
  * msgpack 1.0.6 drops official support of Python 2.7, as pip and
    GitHub Action "setup-python" no longer supports Python 2.7.
* Packer
  * Packer uses `use_bin_type=True` by default.
    Bytes are encoded in the bin type in MessagePack.
  * The `encoding` option is removed. UTF-8 is always used.
* Unpacker
  * Unpacker uses `raw=False` by default. It assumes str values are valid UTF-8 strings
    and decodes them to Python str (Unicode) objects.
  * `encoding` option is removed. You can use `raw=True` to support the old format (e.g. unpack into bytes, not str).
  * The default value of `max_buffer_size` is changed from 0 to 100 MiB to avoid DoS attacks.
    You need to pass `max_buffer_size=0` if you have large but safe data.
  * The default value of `strict_map_key` is changed to True to avoid hash DoS.
    You need to pass `strict_map_key=False` if you have data that contain map keys
    whose type is neither bytes nor str.
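As a minimal sketch of the 1.0 defaults in practice (values are only illustrative):

```py
import msgpack

packed = msgpack.packb({"key": b"value"})   # use_bin_type=True is the default

print(msgpack.unpackb(packed))              # raw=False: {'key': b'value'} (str stays str)
print(msgpack.unpackb(packed, raw=True))    # pre-1.0 behaviour: {b'key': b'value'}
```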

README.rst (deleted)

@ -1,43 +0,0 @@
===========================
MessagePack Python Binding
===========================
:author: INADA Naoki
:version: 0.1.0
:date: 2009-07-12
HOW TO USE
-----------
You can read document in docstring after `import msgpack`
INSTALL
---------
Cython_ is required to build msgpack.
.. _Cython: http://www.cython.org/
posix
''''''
You can install msgpack in common way.
$ python setup.py install
Windows
''''''''
MessagePack requires gcc currently. So you need to prepare
MinGW GCC.
$ python setup.py build -c mingw32
$ python setup.py install
TEST
----
MessagePack uses `nosetest` for testing.
Run test with following command:
$ nosetests test
..
vim: filetype=rst

5
SECURITY.md Normal file

@ -0,0 +1,5 @@
## Security contact information
To report a security vulnerability, please use the
[Tidelift security contact](https://tidelift.com/security).
Tidelift will coordinate the fix and disclosure.

38
benchmark/benchmark.py Normal file

@ -0,0 +1,38 @@
from msgpack import fallback

try:
    from msgpack import _cmsgpack

    has_ext = True
except ImportError:
    has_ext = False

import timeit


def profile(name, func):
    times = timeit.repeat(func, number=1000, repeat=4)
    times = ", ".join(["%8f" % t for t in times])
    print("%-30s %40s" % (name, times))


def simple(name, data):
    if has_ext:
        packer = _cmsgpack.Packer()
        profile("packing %s (ext)" % name, lambda: packer.pack(data))
    packer = fallback.Packer()
    profile("packing %s (fallback)" % name, lambda: packer.pack(data))

    data = packer.pack(data)

    if has_ext:
        profile("unpacking %s (ext)" % name, lambda: _cmsgpack.unpackb(data))
    profile("unpacking %s (fallback)" % name, lambda: fallback.unpackb(data))


def main():
    simple("integers", [7] * 10000)
    simple("bytes", [b"x" * n for n in range(100)] * 10)
    simple("lists", [[]] * 10000)
    simple("dicts", [{}] * 10000)


main()

22
docker/buildwheel.sh Normal file

@ -0,0 +1,22 @@
#!/bin/bash
DOCKER_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
source "$DOCKER_DIR/shared.env"
set -e -x
ARCH=`uname -p`
echo "arch=$ARCH"
ls /opt/python
for V in "${PYTHON_VERSIONS[@]}"; do
PYBIN=/opt/python/$V/bin
rm -rf build/ # Avoid lib build by narrow Python is used by wide python
$PYBIN/python -m build -w
done
cd dist
for whl in *.whl; do
auditwheel repair "$whl"
rm "$whl"
done

17
docker/runtests.sh Executable file

@ -0,0 +1,17 @@
#!/bin/bash
DOCKER_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
source "$DOCKER_DIR/shared.env"
set -e -x
for V in "${PYTHON_VERSIONS[@]}"; do
PYBIN=/opt/python/$V/bin
$PYBIN/python setup.py install
rm -rf build/ # Avoid lib build by narrow Python is used by wide python
$PYBIN/pip install pytest
pushd test # prevent importing msgpack package in current directory.
$PYBIN/python -c 'import sys; print(hex(sys.maxsize))'
$PYBIN/python -c 'from msgpack import _cmsgpack' # Ensure extension is available
$PYBIN/pytest -v .
popd
done

7
docker/shared.env Normal file

@ -0,0 +1,7 @@
PYTHON_VERSIONS=(
cp310-cp310
cp39-cp39
cp38-cp38
cp37-cp37m
cp36-cp36m
)

159
docs/Makefile Normal file

@ -0,0 +1,159 @@
# Makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
PAPER =
BUILDDIR = _build
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = -E -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext
help:
@echo "Please use \`make <target>' where <target> is one of"
@echo " html to make standalone HTML files"
@echo " dirhtml to make HTML files named index.html in directories"
@echo " singlehtml to make a single large HTML file"
@echo " pickle to make pickle files"
@echo " json to make JSON files"
@echo " htmlhelp to make HTML files and a HTML help project"
@echo " qthelp to make HTML files and a qthelp project"
@echo " devhelp to make HTML files and a Devhelp project"
@echo " epub to make an epub"
@echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
@echo " latexpdf to make LaTeX files and run them through pdflatex"
@echo " text to make text files"
@echo " man to make manual pages"
@echo " texinfo to make Texinfo files"
@echo " info to make Texinfo files and run them through makeinfo"
@echo " gettext to make PO message catalogs"
@echo " changes to make an overview of all changed/added/deprecated items"
@echo " linkcheck to check all external links for integrity"
@echo " doctest to run all doctests embedded in the documentation (if enabled)"
clean:
-rm -rf $(BUILDDIR)/*
html:
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
dirhtml:
$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
singlehtml:
$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
@echo
@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
pickle:
$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
@echo
@echo "Build finished; now you can process the pickle files."
json:
$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
@echo
@echo "Build finished; now you can process the JSON files."
htmlhelp:
$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
@echo
@echo "Build finished; now you can run HTML Help Workshop with the" \
".hhp project file in $(BUILDDIR)/htmlhelp."
qthelp:
$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
@echo
@echo "Build finished; now you can run "qcollectiongenerator" with the" \
".qhcp project file in $(BUILDDIR)/qthelp, like this:"
@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/msgpack.qhcp"
@echo "To view the help file:"
@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/msgpack.qhc"
devhelp:
$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
@echo
@echo "Build finished."
@echo "To view the help file:"
@echo "# mkdir -p $$HOME/.local/share/devhelp/msgpack"
@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/msgpack"
@echo "# devhelp"
epub:
$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
@echo
@echo "Build finished. The epub file is in $(BUILDDIR)/epub."
latex:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo
@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
@echo "Run \`make' in that directory to run these through (pdf)latex" \
"(use \`make latexpdf' here to do that automatically)."
latexpdf:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through pdflatex..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
text:
$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
@echo
@echo "Build finished. The text files are in $(BUILDDIR)/text."
man:
$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
@echo
@echo "Build finished. The manual pages are in $(BUILDDIR)/man."
texinfo:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo
@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
@echo "Run \`make' in that directory to run these through makeinfo" \
"(use \`make info' here to do that automatically)."
info:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo "Running Texinfo files through makeinfo..."
make -C $(BUILDDIR)/texinfo info
@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
gettext:
$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
@echo
@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
changes:
$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
@echo
@echo "The overview file is in $(BUILDDIR)/changes."
linkcheck:
$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
@echo
@echo "Link check complete; look for any errors in the above output " \
"or in $(BUILDDIR)/linkcheck/output.txt."
doctest:
$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
@echo "Testing of doctests in the sources finished, look at the " \
"results in $(BUILDDIR)/doctest/output.txt."
serve: html
python3 -m http.server -d _build/html
zip: html
cd _build/html && zip -r ../../../msgpack-doc.zip .

1
docs/_static/README.txt vendored Normal file
View file

@ -0,0 +1 @@
Sphinx will copy the contents of docs/_static/ directory to the build location.

32
docs/advanced.rst Normal file
View file

@ -0,0 +1,32 @@
Advanced usage
===============
Packer
------
autoreset
~~~~~~~~~
When you use the ``autoreset=False`` option of :class:`~msgpack.Packer`,
the ``pack()`` method doesn't return the packed ``bytes``.
You can use :meth:`~msgpack.Packer.bytes` or :meth:`~msgpack.Packer.getbuffer` to
get the packed data.
``bytes()`` returns a ``bytes`` object. ``getbuffer()`` returns a bytes-like
object. Its concrete type is an implementation detail and may change in future
versions.
You can avoid creating a temporary ``bytes`` object by using ``Packer.getbuffer()``.
.. code-block:: python
packer = Packer(use_bin_type=True, autoreset=False)
packer.pack([1, 2])
packer.pack([3, 4])
with open('data.bin', 'wb') as f:
f.write(packer.getbuffer())
packer.reset() # reset internal buffer
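The buffer written this way is an ordinary msgpack stream, so it can be read back with a streaming ``Unpacker``. A minimal sketch, assuming the ``data.bin`` file produced by the example above:

```python
import msgpack

# Read the concatenated stream back one object at a time.
with open("data.bin", "rb") as f:
    for obj in msgpack.Unpacker(f):
        print(obj)  # [1, 2] then [3, 4]
```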

43
docs/api.rst Normal file
View file

@ -0,0 +1,43 @@
API reference
=============
.. module:: msgpack
.. autofunction:: pack
``dump()`` is an alias for :func:`pack`
.. autofunction:: packb
``dumps()`` is an alias for :func:`packb`
.. autofunction:: unpack
``load()`` is an alias for :func:`unpack`
.. autofunction:: unpackb
``loads()`` is an alias for :func:`unpackb`
.. autoclass:: Packer
:members:
.. autoclass:: Unpacker
:members:
.. autoclass:: ExtType
.. autoclass:: Timestamp
:members:
:special-members: __init__
exceptions
----------
These exceptions are accessible via the `msgpack` package.
(For example, `msgpack.OutOfData` is a shortcut for `msgpack.exceptions.OutOfData`.)
.. automodule:: msgpack.exceptions
:members:
:undoc-members:
:show-inheritance:
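A short round-trip illustrating the aliases listed above (a minimal usage sketch):

```python
import msgpack

payload = {"compact": True, "schema": 0}

packed = msgpack.packb(payload)
assert msgpack.dumps(payload) == packed    # dumps is the same callable as packb
assert msgpack.unpackb(packed) == payload  # loads is the same callable as unpackb
assert msgpack.loads(packed) == payload
```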

283
docs/conf.py Normal file
View file

@ -0,0 +1,283 @@
# msgpack documentation build configuration file, created by
# sphinx-quickstart on Sun Feb 24 14:20:50 2013.
#
# This file is execfile()d with the current directory set to its containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
# import os
# import sys
# sys.path.insert(0, os.path.abspath('..'))
# -- General configuration -----------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ["sphinx.ext.autodoc", "sphinx.ext.viewcode"]
# Add any paths that contain templates here, relative to this directory.
templates_path = ["_templates"]
# The suffix of source filenames.
source_suffix = ".rst"
# The encoding of source files.
# source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = "index"
# General information about the project.
project = "msgpack"
copyright = "Inada Naoki"
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
# The full version, including alpha/beta/rc tags.
version = release = "1.0"
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
# language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
# today = ''
# Else, today_fmt is used as the format for a strftime call.
# today_fmt = '%B %d, %Y'
today_fmt = "%Y-%m-%d"
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ["_build"]
# The reST default role (used for this markup: `text`) to use for all documents.
# default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
# add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
# add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
# show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = "sphinx"
# A list of ignored prefixes for module index sorting.
# modindex_common_prefix = []
# -- Options for HTML output ---------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = "sphinx_rtd_theme"
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
# html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
# html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
# html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
# html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
# html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
# html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ["_static"]
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
# html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
# html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
# html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
# html_additional_pages = {}
# If false, no module index is generated.
# html_domain_indices = True
# If false, no index is generated.
# html_use_index = True
# If true, the index is split into individual pages for each letter.
# html_split_index = False
# If true, links to the reST sources are added to the pages.
# html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
# html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
# html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
# html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = "msgpackdoc"
# -- Options for LaTeX output --------------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#'preamble': '',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass [howto/manual]).
latex_documents = [
("index", "msgpack.tex", "msgpack Documentation", "Author", "manual"),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
# latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
# latex_use_parts = False
# If true, show page references after internal links.
# latex_show_pagerefs = False
# If true, show URL addresses after external links.
# latex_show_urls = False
# Documents to append as an appendix to all manuals.
# latex_appendices = []
# If false, no module index is generated.
# latex_domain_indices = True
# -- Options for manual page output --------------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [("index", "msgpack", "msgpack Documentation", ["Author"], 1)]
# If true, show URL addresses after external links.
# man_show_urls = False
# -- Options for Texinfo output ------------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
(
"index",
"msgpack",
"msgpack Documentation",
"Author",
"msgpack",
"One line description of project.",
"Miscellaneous",
),
]
# Documents to append as an appendix to all manuals.
# texinfo_appendices = []
# If false, no module index is generated.
# texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
# texinfo_show_urls = 'footnote'
# -- Options for Epub output ---------------------------------------------------
# Bibliographic Dublin Core info.
epub_title = "msgpack"
epub_author = "Author"
epub_publisher = "Author"
epub_copyright = "2013, Author"
# The language of the text. It defaults to the language option
# or en if the language is not set.
# epub_language = ''
# The scheme of the identifier. Typical schemes are ISBN or URL.
# epub_scheme = ''
# The unique identifier of the text. This can be a ISBN number
# or the project homepage.
# epub_identifier = ''
# A unique identification for the text.
# epub_uid = ''
# A tuple containing the cover image and cover page html template filenames.
# epub_cover = ()
# HTML files that should be inserted before the pages created by sphinx.
# The format is a list of tuples containing the path and title.
# epub_pre_files = []
# HTML files that should be inserted after the pages created by sphinx.
# The format is a list of tuples containing the path and title.
# epub_post_files = []
# A list of files that should not be packed into the epub file.
# epub_exclude_files = []
# The depth of the table of contents in toc.ncx.
# epub_tocdepth = 3
# Allow duplicate toc entries.
# epub_tocdup = True

11
docs/index.rst Normal file
View file

@ -0,0 +1,11 @@
msgpack document
================
`MessagePack <http://msgpack.org>`_ is an efficient format for
inter-language data exchange.
.. toctree::
:maxdepth: 1
api
advanced

2
docs/requirements.txt Normal file
View file

@ -0,0 +1,2 @@
sphinx~=7.3.7
sphinx-rtd-theme~=2.0.0

View file

@ -1,6 +1,51 @@
# coding: utf-8
from msgpack.__version__ import *
from msgpack._msgpack import *
# ruff: noqa: F401
import os
from .exceptions import * # noqa: F403
from .ext import ExtType, Timestamp
version = (1, 1, 2)
__version__ = "1.1.2"
if os.environ.get("MSGPACK_PUREPYTHON"):
from .fallback import Packer, Unpacker, unpackb
else:
try:
from ._cmsgpack import Packer, Unpacker, unpackb
except ImportError:
from .fallback import Packer, Unpacker, unpackb
def pack(o, stream, **kwargs):
"""
Pack object `o` and write it to `stream`
See :class:`Packer` for options.
"""
packer = Packer(**kwargs)
stream.write(packer.pack(o))
def packb(o, **kwargs):
"""
Pack object `o` and return packed bytes
See :class:`Packer` for options.
"""
return Packer(**kwargs).pack(o)
def unpack(stream, **kwargs):
"""
Unpack an object from `stream`.
Raises `ExtraData` when `stream` contains extra bytes.
See :class:`Unpacker` for options.
"""
data = stream.read()
return unpackb(data, **kwargs)
# alias for compatibility to simplejson/marshal/pickle.
load = unpack
@ -8,13 +53,3 @@ loads = unpackb
dump = pack
dumps = packb
def packs(*args, **kw):
from warnings import warn
warn("msgpack.packs() is deprecated. Use packb() instead.", DeprecationWarning)
return packb(*args, **kw)
def unpacks(*args, **kw):
from warnings import warn
warn("msgpack.unpacks() is deprecated. Use unpackb() instead.", DeprecationWarning)
return unpackb(*args, **kw)
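The removed ``packs()``/``unpacks()`` shims were thin wrappers around ``packb()``/``unpackb()``. A minimal sketch of the remaining stream helpers and their simplejson/pickle-style aliases:

```python
import io

import msgpack

# pack()/unpack() work on streams; dump()/load() are the aliases above.
buf = io.BytesIO()
msgpack.pack({"id": 1}, buf)
buf.seek(0)
assert msgpack.unpack(buf) == {"id": 1}

buf = io.BytesIO()
msgpack.dump([1, 2, 3], buf)
buf.seek(0)
assert msgpack.load(buf) == [1, 2, 3]
```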

12
msgpack/_cmsgpack.pyx Normal file
View file

@ -0,0 +1,12 @@
#cython: embedsignature=True, c_string_encoding=ascii, language_level=3
#cython: freethreading_compatible = True
import cython
from cpython.datetime cimport import_datetime, datetime_new
import_datetime()
import datetime
cdef object utc = datetime.timezone.utc
cdef object epoch = datetime_new(1970, 1, 1, 0, 0, 0, 0, tz=utc)
include "_packer.pyx"
include "_unpacker.pyx"

View file

@ -1,427 +0,0 @@
# coding: utf-8
#cython: embedsignature=True
from cpython cimport *
cdef extern from "Python.h":
ctypedef char* const_char_ptr "const char*"
ctypedef char* const_void_ptr "const void*"
ctypedef struct PyObject
cdef int PyObject_AsReadBuffer(object o, const_void_ptr* buff, Py_ssize_t* buf_len) except -1
from libc.stdlib cimport *
from libc.string cimport *
import gc
_gc_disable = gc.disable
_gc_enable = gc.enable
cdef extern from "pack.h":
struct msgpack_packer:
char* buf
size_t length
size_t buf_size
int msgpack_pack_int(msgpack_packer* pk, int d)
int msgpack_pack_nil(msgpack_packer* pk)
int msgpack_pack_true(msgpack_packer* pk)
int msgpack_pack_false(msgpack_packer* pk)
int msgpack_pack_long(msgpack_packer* pk, long d)
int msgpack_pack_long_long(msgpack_packer* pk, long long d)
int msgpack_pack_unsigned_long_long(msgpack_packer* pk, unsigned long long d)
int msgpack_pack_double(msgpack_packer* pk, double d)
int msgpack_pack_array(msgpack_packer* pk, size_t l)
int msgpack_pack_map(msgpack_packer* pk, size_t l)
int msgpack_pack_raw(msgpack_packer* pk, size_t l)
int msgpack_pack_raw_body(msgpack_packer* pk, char* body, size_t l)
cdef int DEFAULT_RECURSE_LIMIT=511
cdef class Packer(object):
"""MessagePack Packer
usage:
packer = Packer()
astream.write(packer.pack(a))
astream.write(packer.pack(b))
"""
cdef msgpack_packer pk
cdef object _default
cdef object _bencoding
cdef object _berrors
cdef char *encoding
cdef char *unicode_errors
def __cinit__(self):
cdef int buf_size = 1024*1024
self.pk.buf = <char*> malloc(buf_size);
if self.pk.buf == NULL:
raise MemoryError("Unable to allocate internal buffer.")
self.pk.buf_size = buf_size
self.pk.length = 0
def __init__(self, default=None, encoding='utf-8', unicode_errors='strict'):
if default is not None:
if not PyCallable_Check(default):
raise TypeError("default must be a callable.")
self._default = default
if encoding is None:
self.encoding = NULL
self.unicode_errors = NULL
else:
if isinstance(encoding, unicode):
self._bencoding = encoding.encode('ascii')
else:
self._bencoding = encoding
self.encoding = PyBytes_AsString(self._bencoding)
if isinstance(unicode_errors, unicode):
self._berrors = unicode_errors.encode('ascii')
else:
self._berrors = unicode_errors
self.unicode_errors = PyBytes_AsString(self._berrors)
def __dealloc__(self):
free(self.pk.buf);
cdef int _pack(self, object o, int nest_limit=DEFAULT_RECURSE_LIMIT) except -1:
cdef long long llval
cdef unsigned long long ullval
cdef long longval
cdef double fval
cdef char* rawval
cdef int ret
cdef dict d
if nest_limit < 0:
raise ValueError("Too deep.")
if o is None:
ret = msgpack_pack_nil(&self.pk)
elif isinstance(o, bool):
if o:
ret = msgpack_pack_true(&self.pk)
else:
ret = msgpack_pack_false(&self.pk)
elif PyLong_Check(o):
if o > 0:
ullval = o
ret = msgpack_pack_unsigned_long_long(&self.pk, ullval)
else:
llval = o
ret = msgpack_pack_long_long(&self.pk, llval)
elif PyInt_Check(o):
longval = o
ret = msgpack_pack_long(&self.pk, longval)
elif PyFloat_Check(o):
fval = o
ret = msgpack_pack_double(&self.pk, fval)
elif PyBytes_Check(o):
rawval = o
ret = msgpack_pack_raw(&self.pk, len(o))
if ret == 0:
ret = msgpack_pack_raw_body(&self.pk, rawval, len(o))
elif PyUnicode_Check(o):
if not self.encoding:
raise TypeError("Can't encode utf-8 no encoding is specified")
o = PyUnicode_AsEncodedString(o, self.encoding, self.unicode_errors)
rawval = o
ret = msgpack_pack_raw(&self.pk, len(o))
if ret == 0:
ret = msgpack_pack_raw_body(&self.pk, rawval, len(o))
elif PyDict_Check(o):
d = o
ret = msgpack_pack_map(&self.pk, len(d))
if ret == 0:
for k,v in d.items():
ret = self._pack(k, nest_limit-1)
if ret != 0: break
ret = self._pack(v, nest_limit-1)
if ret != 0: break
elif PySequence_Check(o):
ret = msgpack_pack_array(&self.pk, len(o))
if ret == 0:
for v in o:
ret = self._pack(v, nest_limit-1)
if ret != 0: break
elif self._default:
o = self._default(o)
ret = self._pack(o, nest_limit-1)
else:
raise TypeError("can't serialize %r" % (o,))
return ret
def pack(self, object obj):
cdef int ret
ret = self._pack(obj, DEFAULT_RECURSE_LIMIT)
if ret:
raise TypeError
buf = PyBytes_FromStringAndSize(self.pk.buf, self.pk.length)
self.pk.length = 0
return buf
def pack(object o, object stream, default=None, encoding='utf-8', unicode_errors='strict'):
"""
pack an object `o` and write it to stream)."""
packer = Packer(default=default, encoding=encoding, unicode_errors=unicode_errors)
stream.write(packer.pack(o))
def packb(object o, default=None, encoding='utf-8', unicode_errors='strict'):
"""
pack o and return packed bytes."""
packer = Packer(default=default, encoding=encoding, unicode_errors=unicode_errors)
return packer.pack(o)
cdef extern from "unpack.h":
ctypedef struct msgpack_user:
int use_list
PyObject* object_hook
PyObject* list_hook
char *encoding
char *unicode_errors
ctypedef struct template_context:
msgpack_user user
PyObject* obj
size_t count
unsigned int ct
PyObject* key
int template_execute(template_context* ctx, const_char_ptr data,
size_t len, size_t* off) except -1
void template_init(template_context* ctx)
object template_data(template_context* ctx)
def unpackb(object packed, object object_hook=None, object list_hook=None, bint use_list=0, encoding=None, unicode_errors="strict"):
"""
Unpack packed_bytes to object. Returns an unpacked object."""
cdef template_context ctx
cdef size_t off = 0
cdef int ret
cdef char* buf
cdef Py_ssize_t buf_len
PyObject_AsReadBuffer(packed, <const_void_ptr*>&buf, &buf_len)
if encoding is None:
enc = NULL
err = NULL
else:
if isinstance(encoding, unicode):
bencoding = encoding.encode('ascii')
else:
bencoding = encoding
if isinstance(unicode_errors, unicode):
berrors = unicode_errors.encode('ascii')
else:
berrors = unicode_errors
enc = PyBytes_AsString(bencoding)
err = PyBytes_AsString(berrors)
template_init(&ctx)
ctx.user.use_list = use_list
ctx.user.object_hook = ctx.user.list_hook = NULL
ctx.user.encoding = enc
ctx.user.unicode_errors = err
if object_hook is not None:
if not PyCallable_Check(object_hook):
raise TypeError("object_hook must be a callable.")
ctx.user.object_hook = <PyObject*>object_hook
if list_hook is not None:
if not PyCallable_Check(list_hook):
raise TypeError("list_hook must be a callable.")
ctx.user.list_hook = <PyObject*>list_hook
_gc_disable()
try:
ret = template_execute(&ctx, buf, buf_len, &off)
finally:
_gc_enable()
if ret == 1:
return template_data(&ctx)
else:
return None
def unpack(object stream, object object_hook=None, object list_hook=None, bint use_list=0, encoding=None, unicode_errors="strict"):
"""
unpack an object from stream.
"""
return unpackb(stream.read(), use_list=use_list,
object_hook=object_hook, list_hook=list_hook, encoding=encoding, unicode_errors=unicode_errors)
cdef class Unpacker(object):
"""
Streaming unpacker.
read_size is used like file_like.read(read_size)
`file_like` is a file-like object having `.read(n)` method.
When `Unpacker` initialized with `file_like`, unpacker reads serialized data
from it and `.feed()` method is not usable.
`read_size` is used as `file_like.read(read_size)`. (default: 1M)
If `use_list` is true, msgpack list is deserialized to Python list.
Otherwise, it is deserialized to Python tuple. (default: False)
`object_hook` is same to simplejson. If it is not None, it should be callable
and Unpacker calls it when deserializing key-value.
`encoding` is encoding used for decoding msgpack bytes. If it is None (default),
msgpack bytes is deserialized to Python bytes.
`unicode_errors` is used for decoding bytes.
example::
unpacker = Unpacker()
while 1:
buf = astream.read()
unpacker.feed(buf)
for o in unpacker:
do_something(o)
"""
cdef template_context ctx
cdef char* buf
cdef size_t buf_size, buf_head, buf_tail
cdef object file_like
cdef object file_like_read
cdef Py_ssize_t read_size
cdef bint use_list
cdef object object_hook
cdef object _bencoding
cdef object _berrors
cdef char *encoding
cdef char *unicode_errors
def __cinit__(self):
self.buf = NULL
def __dealloc__(self):
free(self.buf)
self.buf = NULL
def __init__(self, file_like=None, Py_ssize_t read_size=1024*1024, bint use_list=0,
object object_hook=None, object list_hook=None,
encoding=None, unicode_errors='strict'):
self.use_list = use_list
self.file_like = file_like
if file_like:
self.file_like_read = file_like.read
if not PyCallable_Check(self.file_like_read):
raise ValueError("`file_like.read` must be a callable.")
self.read_size = read_size
self.buf = <char*>malloc(read_size)
if self.buf == NULL:
raise MemoryError("Unable to allocate internal buffer.")
self.buf_size = read_size
self.buf_head = 0
self.buf_tail = 0
template_init(&self.ctx)
self.ctx.user.use_list = use_list
self.ctx.user.object_hook = self.ctx.user.list_hook = <PyObject*>NULL
if object_hook is not None:
if not PyCallable_Check(object_hook):
raise TypeError("object_hook must be a callable.")
self.ctx.user.object_hook = <PyObject*>object_hook
if list_hook is not None:
if not PyCallable_Check(list_hook):
raise TypeError("list_hook must be a callable.")
self.ctx.user.list_hook = <PyObject*>list_hook
if encoding is None:
self.ctx.user.encoding = NULL
self.ctx.user.unicode_errors = NULL
else:
if isinstance(encoding, unicode):
self._bencoding = encoding.encode('ascii')
else:
self._bencoding = encoding
self.ctx.user.encoding = PyBytes_AsString(self._bencoding)
if isinstance(unicode_errors, unicode):
self._berrors = unicode_errors.encode('ascii')
else:
self._berrors = unicode_errors
self.ctx.user.unicode_errors = PyBytes_AsString(self._berrors)
def feed(self, object next_bytes):
cdef char* buf
cdef Py_ssize_t buf_len
if self.file_like is not None:
raise AssertionError(
"unpacker.feed() is not be able to use with`file_like`.")
PyObject_AsReadBuffer(next_bytes, <const_void_ptr*>&buf, &buf_len)
self.append_buffer(buf, buf_len)
cdef append_buffer(self, void* _buf, Py_ssize_t _buf_len):
cdef:
char* buf = self.buf
size_t head = self.buf_head
size_t tail = self.buf_tail
size_t buf_size = self.buf_size
size_t new_size
if tail + _buf_len > buf_size:
if ((tail - head) + _buf_len)*2 < buf_size:
# move to front.
memmove(buf, buf + head, tail - head)
tail -= head
head = 0
else:
# expand buffer.
new_size = tail + _buf_len
if new_size < buf_size*2:
new_size = buf_size*2
buf = <char*>realloc(buf, new_size)
if buf == NULL:
# self.buf still holds old buffer and will be freed during
# obj destruction
raise MemoryError("Unable to enlarge internal buffer.")
buf_size = new_size
memcpy(buf + tail, <char*>(_buf), _buf_len)
self.buf = buf
self.buf_head = head
self.buf_size = buf_size
self.buf_tail = tail + _buf_len
# prepare self.buf from file_like
cdef fill_buffer(self):
if self.file_like is not None:
next_bytes = self.file_like_read(self.read_size)
if next_bytes:
self.append_buffer(PyBytes_AsString(next_bytes),
PyBytes_Size(next_bytes))
else:
self.file_like = None
cpdef unpack(self):
"""unpack one object"""
cdef int ret
while 1:
_gc_disable()
ret = template_execute(&self.ctx, self.buf, self.buf_tail, &self.buf_head)
_gc_enable()
if ret == 1:
o = template_data(&self.ctx)
template_init(&self.ctx)
return o
elif ret == 0:
if self.file_like is not None:
self.fill_buffer()
continue
raise StopIteration("No more unpack data.")
else:
raise ValueError("Unpack failed: error = %d" % (ret,))
def __iter__(self):
return self
def __next__(self):
return self.unpack()
# for debug.
#def _buf(self):
# return PyString_FromStringAndSize(self.buf, self.buf_tail)
#def _off(self):
# return self.buf_head

364
msgpack/_packer.pyx Normal file
View file

@ -0,0 +1,364 @@
from cpython cimport *
from cpython.bytearray cimport PyByteArray_Check, PyByteArray_CheckExact
from cpython.datetime cimport (
PyDateTime_CheckExact, PyDelta_CheckExact,
datetime_tzinfo, timedelta_days, timedelta_seconds, timedelta_microseconds,
)
cdef ExtType
cdef Timestamp
from .ext import ExtType, Timestamp
cdef extern from "Python.h":
int PyMemoryView_Check(object obj)
cdef extern from "pack.h":
struct msgpack_packer:
char* buf
size_t length
size_t buf_size
bint use_bin_type
int msgpack_pack_nil(msgpack_packer* pk) except -1
int msgpack_pack_true(msgpack_packer* pk) except -1
int msgpack_pack_false(msgpack_packer* pk) except -1
int msgpack_pack_long_long(msgpack_packer* pk, long long d) except -1
int msgpack_pack_unsigned_long_long(msgpack_packer* pk, unsigned long long d) except -1
int msgpack_pack_float(msgpack_packer* pk, float d) except -1
int msgpack_pack_double(msgpack_packer* pk, double d) except -1
int msgpack_pack_array(msgpack_packer* pk, size_t l) except -1
int msgpack_pack_map(msgpack_packer* pk, size_t l) except -1
int msgpack_pack_raw(msgpack_packer* pk, size_t l) except -1
int msgpack_pack_bin(msgpack_packer* pk, size_t l) except -1
int msgpack_pack_raw_body(msgpack_packer* pk, char* body, size_t l) except -1
int msgpack_pack_ext(msgpack_packer* pk, char typecode, size_t l) except -1
int msgpack_pack_timestamp(msgpack_packer* x, long long seconds, unsigned long nanoseconds) except -1
cdef int DEFAULT_RECURSE_LIMIT=511
cdef long long ITEM_LIMIT = (2**32)-1
cdef inline int PyBytesLike_Check(object o):
return PyBytes_Check(o) or PyByteArray_Check(o)
cdef inline int PyBytesLike_CheckExact(object o):
return PyBytes_CheckExact(o) or PyByteArray_CheckExact(o)
cdef class Packer:
"""
MessagePack Packer
Usage::
packer = Packer()
astream.write(packer.pack(a))
astream.write(packer.pack(b))
Packer's constructor has some keyword arguments:
:param default:
When specified, it should be callable.
Convert user type to builtin type that Packer supports.
See also simplejson's document.
:param bool use_single_float:
Use single precision float type for float. (default: False)
:param bool autoreset:
Reset the buffer after each pack and return its content as `bytes`. (default: True)
If set to false, use `bytes()` to get the content and `.reset()` to clear the buffer.
:param bool use_bin_type:
Use bin type introduced in msgpack spec 2.0 for bytes.
It also enables str8 type for unicode. (default: True)
:param bool strict_types:
If set to true, types will be checked to be exact. Derived classes
from serializable types will not be serialized and will be
treated as unsupported types and forwarded to `default`.
Additionally, tuples will not be serialized as lists.
This is useful when trying to implement accurate serialization
for Python types.
:param bool datetime:
If set to true, datetime with tzinfo is packed into Timestamp type.
Note that the tzinfo is stripped in the timestamp.
You can get UTC datetime with `timestamp=3` option of the Unpacker.
:param str unicode_errors:
The error handler for encoding unicode. (default: 'strict')
DO NOT USE THIS!! This option is kept for very specific usage.
:param int buf_size:
The size of the internal buffer. (default: 256*1024)
Useful if the serialization size can be estimated correctly,
avoiding unnecessary reallocations.
"""
cdef msgpack_packer pk
cdef object _default
cdef object _berrors
cdef const char *unicode_errors
cdef size_t exports # number of exported buffers
cdef bint strict_types
cdef bint use_float
cdef bint autoreset
cdef bint datetime
def __cinit__(self, buf_size=256*1024, **_kwargs):
self.pk.buf = <char*> PyMem_Malloc(buf_size)
if self.pk.buf == NULL:
raise MemoryError("Unable to allocate internal buffer.")
self.pk.buf_size = buf_size
self.pk.length = 0
self.exports = 0
def __dealloc__(self):
PyMem_Free(self.pk.buf)
self.pk.buf = NULL
assert self.exports == 0
cdef _check_exports(self):
if self.exports > 0:
raise BufferError("Existing exports of data: Packer cannot be changed")
@cython.critical_section
def __init__(self, *, default=None,
bint use_single_float=False, bint autoreset=True, bint use_bin_type=True,
bint strict_types=False, bint datetime=False, unicode_errors=None,
buf_size=256*1024):
self.use_float = use_single_float
self.strict_types = strict_types
self.autoreset = autoreset
self.datetime = datetime
self.pk.use_bin_type = use_bin_type
if default is not None:
if not PyCallable_Check(default):
raise TypeError("default must be a callable.")
self._default = default
self._berrors = unicode_errors
if unicode_errors is None:
self.unicode_errors = NULL
else:
self.unicode_errors = self._berrors
# returns -2 when default should(o) be called
cdef int _pack_inner(self, object o, bint will_default, int nest_limit) except -1:
cdef long long llval
cdef unsigned long long ullval
cdef unsigned long ulval
cdef const char* rawval
cdef Py_ssize_t L
cdef Py_buffer view
cdef bint strict = self.strict_types
if o is None:
msgpack_pack_nil(&self.pk)
elif o is True:
msgpack_pack_true(&self.pk)
elif o is False:
msgpack_pack_false(&self.pk)
elif PyLong_CheckExact(o) if strict else PyLong_Check(o):
try:
if o > 0:
ullval = o
msgpack_pack_unsigned_long_long(&self.pk, ullval)
else:
llval = o
msgpack_pack_long_long(&self.pk, llval)
except OverflowError as oe:
if will_default:
return -2
else:
raise OverflowError("Integer value out of range")
elif PyFloat_CheckExact(o) if strict else PyFloat_Check(o):
if self.use_float:
msgpack_pack_float(&self.pk, <float>o)
else:
msgpack_pack_double(&self.pk, <double>o)
elif PyBytesLike_CheckExact(o) if strict else PyBytesLike_Check(o):
L = Py_SIZE(o)
if L > ITEM_LIMIT:
PyErr_Format(ValueError, b"%.200s object is too large", Py_TYPE(o).tp_name)
rawval = o
msgpack_pack_bin(&self.pk, L)
msgpack_pack_raw_body(&self.pk, rawval, L)
elif PyUnicode_CheckExact(o) if strict else PyUnicode_Check(o):
if self.unicode_errors == NULL:
rawval = PyUnicode_AsUTF8AndSize(o, &L)
if L > ITEM_LIMIT:
raise ValueError("unicode string is too large")
else:
o = PyUnicode_AsEncodedString(o, NULL, self.unicode_errors)
L = Py_SIZE(o)
if L > ITEM_LIMIT:
raise ValueError("unicode string is too large")
rawval = o
msgpack_pack_raw(&self.pk, L)
msgpack_pack_raw_body(&self.pk, rawval, L)
elif PyDict_CheckExact(o) if strict else PyDict_Check(o):
L = len(o)
if L > ITEM_LIMIT:
raise ValueError("dict is too large")
msgpack_pack_map(&self.pk, L)
for k, v in o.items():
self._pack(k, nest_limit)
self._pack(v, nest_limit)
elif type(o) is ExtType if strict else isinstance(o, ExtType):
# This should be before Tuple because ExtType is namedtuple.
rawval = o.data
L = len(o.data)
if L > ITEM_LIMIT:
raise ValueError("EXT data is too large")
msgpack_pack_ext(&self.pk, <long>o.code, L)
msgpack_pack_raw_body(&self.pk, rawval, L)
elif type(o) is Timestamp:
llval = o.seconds
ulval = o.nanoseconds
msgpack_pack_timestamp(&self.pk, llval, ulval)
elif PyList_CheckExact(o) if strict else (PyTuple_Check(o) or PyList_Check(o)):
L = Py_SIZE(o)
if L > ITEM_LIMIT:
raise ValueError("list is too large")
msgpack_pack_array(&self.pk, L)
for v in o:
self._pack(v, nest_limit)
elif PyMemoryView_Check(o):
PyObject_GetBuffer(o, &view, PyBUF_SIMPLE)
L = view.len
if L > ITEM_LIMIT:
PyBuffer_Release(&view);
raise ValueError("memoryview is too large")
try:
msgpack_pack_bin(&self.pk, L)
msgpack_pack_raw_body(&self.pk, <char*>view.buf, L)
finally:
PyBuffer_Release(&view);
elif self.datetime and PyDateTime_CheckExact(o) and datetime_tzinfo(o) is not None:
delta = o - epoch
if not PyDelta_CheckExact(delta):
raise ValueError("failed to calculate delta")
llval = timedelta_days(delta) * <long long>(24*60*60) + timedelta_seconds(delta)
ulval = timedelta_microseconds(delta) * 1000
msgpack_pack_timestamp(&self.pk, llval, ulval)
elif will_default:
return -2
elif self.datetime and PyDateTime_CheckExact(o):
# this should be later than will_default
PyErr_Format(ValueError, b"can not serialize '%.200s' object where tzinfo=None", Py_TYPE(o).tp_name)
else:
PyErr_Format(TypeError, b"can not serialize '%.200s' object", Py_TYPE(o).tp_name)
cdef int _pack(self, object o, int nest_limit=DEFAULT_RECURSE_LIMIT) except -1:
cdef int ret
if nest_limit < 0:
raise ValueError("recursion limit exceeded.")
nest_limit -= 1
if self._default is not None:
ret = self._pack_inner(o, 1, nest_limit)
if ret == -2:
o = self._default(o)
else:
return ret
return self._pack_inner(o, 0, nest_limit)
@cython.critical_section
def pack(self, object obj):
cdef int ret
self._check_exports()
try:
ret = self._pack(obj, DEFAULT_RECURSE_LIMIT)
except:
self.pk.length = 0
raise
if ret: # should not happen.
raise RuntimeError("internal error")
if self.autoreset:
buf = PyBytes_FromStringAndSize(self.pk.buf, self.pk.length)
self.pk.length = 0
return buf
@cython.critical_section
def pack_ext_type(self, typecode, data):
self._check_exports()
if len(data) > ITEM_LIMIT:
raise ValueError("ext data too large")
msgpack_pack_ext(&self.pk, typecode, len(data))
msgpack_pack_raw_body(&self.pk, data, len(data))
@cython.critical_section
def pack_array_header(self, long long size):
self._check_exports()
if size > ITEM_LIMIT:
raise ValueError("array too large")
msgpack_pack_array(&self.pk, size)
if self.autoreset:
buf = PyBytes_FromStringAndSize(self.pk.buf, self.pk.length)
self.pk.length = 0
return buf
@cython.critical_section
def pack_map_header(self, long long size):
self._check_exports()
if size > ITEM_LIMIT:
raise ValueError("map too learge")
msgpack_pack_map(&self.pk, size)
if self.autoreset:
buf = PyBytes_FromStringAndSize(self.pk.buf, self.pk.length)
self.pk.length = 0
return buf
@cython.critical_section
def pack_map_pairs(self, object pairs):
"""
Pack *pairs* as msgpack map type.
*pairs* should be a sequence of pairs.
(`len(pairs)` and `for k, v in pairs:` should be supported.)
"""
self._check_exports()
size = len(pairs)
if size > ITEM_LIMIT:
raise ValueError("map too large")
msgpack_pack_map(&self.pk, size)
for k, v in pairs:
self._pack(k)
self._pack(v)
if self.autoreset:
buf = PyBytes_FromStringAndSize(self.pk.buf, self.pk.length)
self.pk.length = 0
return buf
@cython.critical_section
def reset(self):
"""Reset internal buffer.
This method is useful only when autoreset=False.
"""
self._check_exports()
self.pk.length = 0
@cython.critical_section
def bytes(self):
"""Return internal buffer contents as bytes object"""
return PyBytes_FromStringAndSize(self.pk.buf, self.pk.length)
def getbuffer(self):
"""Return memoryview of internal buffer.
Note: Packer now supports buffer protocol. You can use memoryview(packer).
"""
return memoryview(self)
def __getbuffer__(self, Py_buffer *buffer, int flags):
PyBuffer_FillInfo(buffer, self, self.pk.buf, self.pk.length, 1, flags)
self.exports += 1
def __releasebuffer__(self, Py_buffer *buffer):
self.exports -= 1
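A sketch of two features defined above: the ``default`` hook for otherwise unsupported types, and the export bookkeeping behind ``getbuffer()``/``reset()`` when ``autoreset=False``:

```python
import msgpack

packer = msgpack.Packer(default=lambda o: sorted(o), autoreset=False)
packer.pack({3, 1, 2})     # set is not a msgpack type -> default() -> [1, 2, 3]
packer.pack("hello")

view = packer.getbuffer()  # bytes-like view of the internal buffer
data = bytes(view)         # copy it out
view.release()             # drop the export so reset() is allowed again
packer.reset()

unpacker = msgpack.Unpacker()
unpacker.feed(data)
assert list(unpacker) == [[1, 2, 3], "hello"]
```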

554
msgpack/_unpacker.pyx Normal file
View file

@ -0,0 +1,554 @@
from cpython cimport *
cdef extern from "Python.h":
ctypedef struct PyObject
object PyMemoryView_GetContiguous(object obj, int buffertype, char order)
from libc.stdlib cimport *
from libc.string cimport *
from libc.limits cimport *
from libc.stdint cimport uint64_t
from .exceptions import (
BufferFull,
OutOfData,
ExtraData,
FormatError,
StackError,
)
from .ext import ExtType, Timestamp
cdef object giga = 1_000_000_000
cdef extern from "unpack.h":
ctypedef struct msgpack_user:
bint use_list
bint raw
bint has_pairs_hook # call object_hook with k-v pairs
bint strict_map_key
int timestamp
PyObject* object_hook
PyObject* list_hook
PyObject* ext_hook
PyObject* timestamp_t
PyObject *giga;
PyObject *utc;
const char *unicode_errors
Py_ssize_t max_str_len
Py_ssize_t max_bin_len
Py_ssize_t max_array_len
Py_ssize_t max_map_len
Py_ssize_t max_ext_len
ctypedef struct unpack_context:
msgpack_user user
PyObject* obj
Py_ssize_t count
ctypedef int (*execute_fn)(unpack_context* ctx, const char* data,
Py_ssize_t len, Py_ssize_t* off) except? -1
execute_fn unpack_construct
execute_fn unpack_skip
execute_fn read_array_header
execute_fn read_map_header
void unpack_init(unpack_context* ctx)
object unpack_data(unpack_context* ctx)
void unpack_clear(unpack_context* ctx)
cdef inline init_ctx(unpack_context *ctx,
object object_hook, object object_pairs_hook,
object list_hook, object ext_hook,
bint use_list, bint raw, int timestamp,
bint strict_map_key,
const char* unicode_errors,
Py_ssize_t max_str_len, Py_ssize_t max_bin_len,
Py_ssize_t max_array_len, Py_ssize_t max_map_len,
Py_ssize_t max_ext_len):
unpack_init(ctx)
ctx.user.use_list = use_list
ctx.user.raw = raw
ctx.user.strict_map_key = strict_map_key
ctx.user.object_hook = ctx.user.list_hook = <PyObject*>NULL
ctx.user.max_str_len = max_str_len
ctx.user.max_bin_len = max_bin_len
ctx.user.max_array_len = max_array_len
ctx.user.max_map_len = max_map_len
ctx.user.max_ext_len = max_ext_len
if object_hook is not None and object_pairs_hook is not None:
raise TypeError("object_pairs_hook and object_hook are mutually exclusive.")
if object_hook is not None:
if not PyCallable_Check(object_hook):
raise TypeError("object_hook must be a callable.")
ctx.user.object_hook = <PyObject*>object_hook
if object_pairs_hook is None:
ctx.user.has_pairs_hook = False
else:
if not PyCallable_Check(object_pairs_hook):
raise TypeError("object_pairs_hook must be a callable.")
ctx.user.object_hook = <PyObject*>object_pairs_hook
ctx.user.has_pairs_hook = True
if list_hook is not None:
if not PyCallable_Check(list_hook):
raise TypeError("list_hook must be a callable.")
ctx.user.list_hook = <PyObject*>list_hook
if ext_hook is not None:
if not PyCallable_Check(ext_hook):
raise TypeError("ext_hook must be a callable.")
ctx.user.ext_hook = <PyObject*>ext_hook
if timestamp < 0 or 3 < timestamp:
raise ValueError("timestamp must be 0..3")
# Add Timestamp type to the user object so it may be used in unpack.h
ctx.user.timestamp = timestamp
ctx.user.timestamp_t = <PyObject*>Timestamp
ctx.user.giga = <PyObject*>giga
ctx.user.utc = <PyObject*>utc
ctx.user.unicode_errors = unicode_errors
def default_read_extended_type(typecode, data):
raise NotImplementedError("Cannot decode extended type with typecode=%d" % typecode)
cdef inline int get_data_from_buffer(object obj,
Py_buffer *view,
char **buf,
Py_ssize_t *buffer_len) except 0:
cdef object contiguous
cdef Py_buffer tmp
if PyObject_GetBuffer(obj, view, PyBUF_FULL_RO) == -1:
raise
if view.itemsize != 1:
PyBuffer_Release(view)
raise BufferError("cannot unpack from multi-byte object")
if PyBuffer_IsContiguous(view, b'A') == 0:
PyBuffer_Release(view)
# create a contiguous copy and get buffer
contiguous = PyMemoryView_GetContiguous(obj, PyBUF_READ, b'C')
PyObject_GetBuffer(contiguous, view, PyBUF_SIMPLE)
# view must hold the only reference to contiguous,
# so memory is freed when view is released
Py_DECREF(contiguous)
buffer_len[0] = view.len
buf[0] = <char*> view.buf
return 1
def unpackb(object packed, *, object object_hook=None, object list_hook=None,
bint use_list=True, bint raw=False, int timestamp=0, bint strict_map_key=True,
unicode_errors=None,
object_pairs_hook=None, ext_hook=ExtType,
Py_ssize_t max_str_len=-1,
Py_ssize_t max_bin_len=-1,
Py_ssize_t max_array_len=-1,
Py_ssize_t max_map_len=-1,
Py_ssize_t max_ext_len=-1):
"""
Unpack packed_bytes to object. Returns an unpacked object.
Raises ``ExtraData`` when *packed* contains extra bytes.
Raises ``ValueError`` when *packed* is incomplete.
Raises ``FormatError`` when *packed* is not valid msgpack.
Raises ``StackError`` when *packed* contains too deeply nested data.
Other exceptions can be raised during unpacking.
See :class:`Unpacker` for options.
*max_xxx_len* options are configured automatically from ``len(packed)``.
"""
cdef unpack_context ctx
cdef Py_ssize_t off = 0
cdef int ret
cdef Py_buffer view
cdef char* buf = NULL
cdef Py_ssize_t buf_len
cdef const char* cerr = NULL
if unicode_errors is not None:
cerr = unicode_errors
get_data_from_buffer(packed, &view, &buf, &buf_len)
if max_str_len == -1:
max_str_len = buf_len
if max_bin_len == -1:
max_bin_len = buf_len
if max_array_len == -1:
max_array_len = buf_len
if max_map_len == -1:
max_map_len = buf_len//2
if max_ext_len == -1:
max_ext_len = buf_len
try:
init_ctx(&ctx, object_hook, object_pairs_hook, list_hook, ext_hook,
use_list, raw, timestamp, strict_map_key, cerr,
max_str_len, max_bin_len, max_array_len, max_map_len, max_ext_len)
ret = unpack_construct(&ctx, buf, buf_len, &off)
finally:
PyBuffer_Release(&view);
if ret == 1:
obj = unpack_data(&ctx)
if off < buf_len:
raise ExtraData(obj, PyBytes_FromStringAndSize(buf+off, buf_len-off))
return obj
unpack_clear(&ctx)
if ret == 0:
raise ValueError("Unpack failed: incomplete input")
elif ret == -2:
raise FormatError
elif ret == -3:
raise StackError
raise ValueError("Unpack failed: error = %d" % (ret,))
cdef class Unpacker:
"""Streaming unpacker.
Arguments:
:param file_like:
File-like object having `.read(n)` method.
If specified, unpacker reads serialized data from it and `.feed()` is not usable.
:param int read_size:
Used as `file_like.read(read_size)`. (default: `min(16*1024, max_buffer_size)`)
:param bool use_list:
If true, unpack msgpack array to Python list.
Otherwise, unpack to Python tuple. (default: True)
:param bool raw:
If true, unpack msgpack raw to Python bytes.
Otherwise, unpack to Python str by decoding with UTF-8 encoding (default).
:param int timestamp:
Control how timestamp type is unpacked:
0 - Timestamp
1 - float (Seconds from the EPOCH)
2 - int (Nanoseconds from the EPOCH)
3 - datetime.datetime (UTC).
:param bool strict_map_key:
If true (default), only str or bytes are accepted for map (dict) keys.
:param object_hook:
When specified, it should be callable.
Unpacker calls it with a dict argument after unpacking msgpack map.
(See also simplejson)
:param object_pairs_hook:
When specified, it should be callable.
Unpacker calls it with a list of key-value pairs after unpacking msgpack map.
(See also simplejson)
:param str unicode_errors:
The error handler for decoding unicode. (default: 'strict')
This option should be used only when you have msgpack data which
contains invalid UTF-8 string.
:param int max_buffer_size:
Limits the size of data waiting to be unpacked. 0 means 2**32-1.
The default value is 100*1024*1024 (100MiB).
Raises `BufferFull` exception when it is insufficient.
You should set this parameter when unpacking data from untrusted source.
:param int max_str_len:
Deprecated, use *max_buffer_size* instead.
Limits max length of str. (default: max_buffer_size)
:param int max_bin_len:
Deprecated, use *max_buffer_size* instead.
Limits max length of bin. (default: max_buffer_size)
:param int max_array_len:
Limits max length of array.
(default: max_buffer_size)
:param int max_map_len:
Limits max length of map.
(default: max_buffer_size//2)
:param int max_ext_len:
Deprecated, use *max_buffer_size* instead.
Limits max size of ext type. (default: max_buffer_size)
Example of streaming deserialize from file-like object::
unpacker = Unpacker(file_like)
for o in unpacker:
process(o)
Example of streaming deserialize from socket::
unpacker = Unpacker()
while True:
buf = sock.recv(1024**2)
if not buf:
break
unpacker.feed(buf)
for o in unpacker:
process(o)
Raises ``ExtraData`` when *packed* contains extra bytes.
Raises ``OutOfData`` when *packed* is incomplete.
Raises ``FormatError`` when *packed* is not valid msgpack.
Raises ``StackError`` when *packed* contains too deeply nested data.
Other exceptions can be raised during unpacking.
"""
cdef unpack_context ctx
cdef char* buf
cdef Py_ssize_t buf_size, buf_head, buf_tail
cdef object file_like
cdef object file_like_read
cdef Py_ssize_t read_size
# To maintain refcnt.
cdef object object_hook, object_pairs_hook, list_hook, ext_hook
cdef object unicode_errors
cdef Py_ssize_t max_buffer_size
cdef uint64_t stream_offset
def __cinit__(self):
self.buf = NULL
def __dealloc__(self):
PyMem_Free(self.buf)
self.buf = NULL
@cython.critical_section
def __init__(self, file_like=None, *, Py_ssize_t read_size=0,
bint use_list=True, bint raw=False, int timestamp=0, bint strict_map_key=True,
object object_hook=None, object object_pairs_hook=None, object list_hook=None,
unicode_errors=None, Py_ssize_t max_buffer_size=100*1024*1024,
object ext_hook=ExtType,
Py_ssize_t max_str_len=-1,
Py_ssize_t max_bin_len=-1,
Py_ssize_t max_array_len=-1,
Py_ssize_t max_map_len=-1,
Py_ssize_t max_ext_len=-1):
cdef const char *cerr=NULL
self.object_hook = object_hook
self.object_pairs_hook = object_pairs_hook
self.list_hook = list_hook
self.ext_hook = ext_hook
self.file_like = file_like
if file_like:
self.file_like_read = file_like.read
if not PyCallable_Check(self.file_like_read):
raise TypeError("`file_like.read` must be a callable.")
if not max_buffer_size:
max_buffer_size = INT_MAX
if max_str_len == -1:
max_str_len = max_buffer_size
if max_bin_len == -1:
max_bin_len = max_buffer_size
if max_array_len == -1:
max_array_len = max_buffer_size
if max_map_len == -1:
max_map_len = max_buffer_size//2
if max_ext_len == -1:
max_ext_len = max_buffer_size
if read_size > max_buffer_size:
raise ValueError("read_size should be less or equal to max_buffer_size")
if not read_size:
read_size = min(max_buffer_size, 1024**2)
self.max_buffer_size = max_buffer_size
self.read_size = read_size
self.buf = <char*>PyMem_Malloc(read_size)
if self.buf == NULL:
raise MemoryError("Unable to allocate internal buffer.")
self.buf_size = read_size
self.buf_head = 0
self.buf_tail = 0
self.stream_offset = 0
if unicode_errors is not None:
self.unicode_errors = unicode_errors
cerr = unicode_errors
init_ctx(&self.ctx, object_hook, object_pairs_hook, list_hook,
ext_hook, use_list, raw, timestamp, strict_map_key, cerr,
max_str_len, max_bin_len, max_array_len,
max_map_len, max_ext_len)
@cython.critical_section
def feed(self, object next_bytes):
"""Append `next_bytes` to internal buffer."""
cdef Py_buffer pybuff
cdef char* buf
cdef Py_ssize_t buf_len
if self.file_like is not None:
raise AssertionError(
"unpacker.feed() is not be able to use with `file_like`.")
get_data_from_buffer(next_bytes, &pybuff, &buf, &buf_len)
try:
self.append_buffer(buf, buf_len)
finally:
PyBuffer_Release(&pybuff)
cdef append_buffer(self, void* _buf, Py_ssize_t _buf_len):
cdef:
char* buf = self.buf
char* new_buf
Py_ssize_t head = self.buf_head
Py_ssize_t tail = self.buf_tail
Py_ssize_t buf_size = self.buf_size
Py_ssize_t new_size
if tail + _buf_len > buf_size:
if ((tail - head) + _buf_len) <= buf_size:
# move to front.
memmove(buf, buf + head, tail - head)
tail -= head
head = 0
else:
# expand buffer.
new_size = (tail-head) + _buf_len
if new_size > self.max_buffer_size:
raise BufferFull
new_size = min(new_size*2, self.max_buffer_size)
new_buf = <char*>PyMem_Malloc(new_size)
if new_buf == NULL:
# self.buf still holds old buffer and will be freed during
# obj destruction
raise MemoryError("Unable to enlarge internal buffer.")
memcpy(new_buf, buf + head, tail - head)
PyMem_Free(buf)
buf = new_buf
buf_size = new_size
tail -= head
head = 0
memcpy(buf + tail, <char*>(_buf), _buf_len)
self.buf = buf
self.buf_head = head
self.buf_size = buf_size
self.buf_tail = tail + _buf_len
cdef int read_from_file(self) except -1:
cdef Py_ssize_t remains = self.max_buffer_size - (self.buf_tail - self.buf_head)
if remains <= 0:
raise BufferFull
next_bytes = self.file_like_read(min(self.read_size, remains))
if next_bytes:
self.append_buffer(PyBytes_AsString(next_bytes), PyBytes_Size(next_bytes))
else:
self.file_like = None
return 0
cdef object _unpack(self, execute_fn execute, bint iter=0):
cdef int ret
cdef object obj
cdef Py_ssize_t prev_head
while 1:
prev_head = self.buf_head
if prev_head < self.buf_tail:
ret = execute(&self.ctx, self.buf, self.buf_tail, &self.buf_head)
self.stream_offset += self.buf_head - prev_head
else:
ret = 0
if ret == 1:
obj = unpack_data(&self.ctx)
unpack_init(&self.ctx)
return obj
elif ret == 0:
if self.file_like is not None:
self.read_from_file()
continue
if iter:
raise StopIteration("No more data to unpack.")
else:
raise OutOfData("No more data to unpack.")
elif ret == -2:
raise FormatError
elif ret == -3:
raise StackError
else:
raise ValueError("Unpack failed: error = %d" % (ret,))
@cython.critical_section
def read_bytes(self, Py_ssize_t nbytes):
"""Read a specified number of raw bytes from the stream"""
cdef Py_ssize_t nread
nread = min(self.buf_tail - self.buf_head, nbytes)
ret = PyBytes_FromStringAndSize(self.buf + self.buf_head, nread)
self.buf_head += nread
if nread < nbytes and self.file_like is not None:
ret += self.file_like.read(nbytes - nread)
nread = len(ret)
self.stream_offset += nread
return ret
@cython.critical_section
def unpack(self):
"""Unpack one object
Raises `OutOfData` when there are no more bytes to unpack.
"""
return self._unpack(unpack_construct)
@cython.critical_section
def skip(self):
"""Read and ignore one object, returning None
Raises `OutOfData` when there are no more bytes to unpack.
"""
return self._unpack(unpack_skip)
@cython.critical_section
def read_array_header(self):
"""assuming the next object is an array, return its size n, such that
the next n unpack() calls will iterate over its contents.
Raises `OutOfData` when there are no more bytes to unpack.
"""
return self._unpack(read_array_header)
@cython.critical_section
def read_map_header(self):
"""assuming the next object is a map, return its size n, such that the
next n * 2 unpack() calls will iterate over its key-value pairs.
Raises `OutOfData` when there are no more bytes to unpack.
"""
return self._unpack(read_map_header)
@cython.critical_section
def tell(self):
"""Returns the current position of the Unpacker in bytes, i.e., the
number of bytes that were read from the input, also the starting
position of the next object.
"""
return self.stream_offset
def __iter__(self):
return self
@cython.critical_section
def __next__(self):
return self._unpack(unpack_construct, 1)
# for debug.
#def _buf(self):
# return PyString_FromStringAndSize(self.buf, self.buf_tail)
#def _off(self):
# return self.buf_head
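A streaming sketch of the API documented above: ``feed()`` arbitrary chunks, iterate complete objects, track the offset with ``tell()``, and catch ``OutOfData`` when nothing complete is buffered:

```python
import msgpack

packed = msgpack.packb({"a": 1}) + msgpack.packb([1, 2, 3])

unpacker = msgpack.Unpacker(max_buffer_size=1024)
unpacker.feed(packed[:5])  # a partial chunk, as if received from a socket
unpacker.feed(packed[5:])

for obj in unpacker:
    print(obj, "ends at byte", unpacker.tell())

try:
    unpacker.unpack()
except msgpack.OutOfData:
    pass  # no complete object left in the buffer
```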

48
msgpack/exceptions.py Normal file
View file

@ -0,0 +1,48 @@
class UnpackException(Exception):
"""Base class for some exceptions raised while unpacking.
NOTE: unpack may raise an exception other than a subclass of
UnpackException. If you want to catch all errors, catch
Exception instead.
"""
class BufferFull(UnpackException):
pass
class OutOfData(UnpackException):
pass
class FormatError(ValueError, UnpackException):
"""Invalid msgpack format"""
class StackError(ValueError, UnpackException):
"""Too nested"""
# Deprecated. Use ValueError instead
UnpackValueError = ValueError
class ExtraData(UnpackValueError):
"""ExtraData is raised when there is trailing data.
This exception is raised only during one-shot (not streaming)
unpacking.
"""
def __init__(self, unpacked, extra):
self.unpacked = unpacked
self.extra = extra
def __str__(self):
return "unpack(b) received extra data."
# Deprecated. Use Exception instead to catch all exception during packing.
PackException = Exception
PackValueError = ValueError
PackOverflowError = OverflowError
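``ExtraData`` keeps both the decoded object and the trailing bytes, which makes it straightforward to recover when several objects were concatenated into one buffer. A minimal sketch:

```python
import msgpack

blob = msgpack.packb(1) + msgpack.packb(2)
try:
    msgpack.unpackb(blob)
except msgpack.ExtraData as e:
    assert e.unpacked == 1
    assert msgpack.unpackb(e.extra) == 2
```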

170
msgpack/ext.py Normal file
View file

@ -0,0 +1,170 @@
import datetime
import struct
from collections import namedtuple
class ExtType(namedtuple("ExtType", "code data")):
"""ExtType represents ext type in msgpack."""
def __new__(cls, code, data):
if not isinstance(code, int):
raise TypeError("code must be int")
if not isinstance(data, bytes):
raise TypeError("data must be bytes")
if not 0 <= code <= 127:
raise ValueError("code must be 0~127")
return super().__new__(cls, code, data)
class Timestamp:
"""Timestamp represents the Timestamp extension type in msgpack.
When built with Cython, msgpack uses C methods to pack and unpack `Timestamp`.
When using pure-Python msgpack, :func:`to_bytes` and :func:`from_bytes` are used to pack and
unpack `Timestamp`.
This class is immutable: Do not override seconds and nanoseconds.
"""
__slots__ = ["seconds", "nanoseconds"]
def __init__(self, seconds, nanoseconds=0):
"""Initialize a Timestamp object.
:param int seconds:
Number of seconds since the UNIX epoch (00:00:00 UTC Jan 1 1970, minus leap seconds).
May be negative.
:param int nanoseconds:
Number of nanoseconds to add to `seconds` to get fractional time.
Maximum is 999_999_999. Default is 0.
Note: Negative times (before the UNIX epoch) are represented as neg. seconds + pos. ns.
"""
if not isinstance(seconds, int):
raise TypeError("seconds must be an integer")
if not isinstance(nanoseconds, int):
raise TypeError("nanoseconds must be an integer")
if not (0 <= nanoseconds < 10**9):
raise ValueError("nanoseconds must be a non-negative integer less than 999999999.")
self.seconds = seconds
self.nanoseconds = nanoseconds
def __repr__(self):
"""String representation of Timestamp."""
return f"Timestamp(seconds={self.seconds}, nanoseconds={self.nanoseconds})"
def __eq__(self, other):
"""Check for equality with another Timestamp object"""
if type(other) is self.__class__:
return self.seconds == other.seconds and self.nanoseconds == other.nanoseconds
return False
def __ne__(self, other):
"""not-equals method (see :func:`__eq__()`)"""
return not self.__eq__(other)
def __hash__(self):
return hash((self.seconds, self.nanoseconds))
@staticmethod
def from_bytes(b):
"""Unpack bytes into a `Timestamp` object.
Used for pure-Python msgpack unpacking.
:param b: Payload from msgpack ext message with code -1
:type b: bytes
:returns: Timestamp object unpacked from msgpack ext payload
:rtype: Timestamp
"""
if len(b) == 4:
seconds = struct.unpack("!L", b)[0]
nanoseconds = 0
elif len(b) == 8:
data64 = struct.unpack("!Q", b)[0]
seconds = data64 & 0x00000003FFFFFFFF
nanoseconds = data64 >> 34
elif len(b) == 12:
nanoseconds, seconds = struct.unpack("!Iq", b)
else:
raise ValueError(
"Timestamp type can only be created from 32, 64, or 96-bit byte objects"
)
return Timestamp(seconds, nanoseconds)
def to_bytes(self):
"""Pack this Timestamp object into bytes.
Used for pure-Python msgpack packing.
:returns data: Payload for EXT message with code -1 (timestamp type)
:rtype: bytes
"""
if (self.seconds >> 34) == 0: # seconds is non-negative and fits in 34 bits
data64 = self.nanoseconds << 34 | self.seconds
if data64 & 0xFFFFFFFF00000000 == 0:
# nanoseconds is zero and seconds < 2**32, so timestamp 32
data = struct.pack("!L", data64)
else:
# timestamp 64
data = struct.pack("!Q", data64)
else:
# timestamp 96
data = struct.pack("!Iq", self.nanoseconds, self.seconds)
return data
@staticmethod
def from_unix(unix_sec):
"""Create a Timestamp from posix timestamp in seconds.
:param unix_sec: Posix timestamp in seconds.
:type unix_sec: int or float
"""
seconds = int(unix_sec // 1)
nanoseconds = int((unix_sec % 1) * 10**9)
return Timestamp(seconds, nanoseconds)
def to_unix(self):
"""Get the timestamp as a floating-point value.
:returns: posix timestamp
:rtype: float
"""
return self.seconds + self.nanoseconds / 1e9
@staticmethod
def from_unix_nano(unix_ns):
"""Create a Timestamp from posix timestamp in nanoseconds.
:param int unix_ns: Posix timestamp in nanoseconds.
:rtype: Timestamp
"""
return Timestamp(*divmod(unix_ns, 10**9))
def to_unix_nano(self):
"""Get the timestamp as a unixtime in nanoseconds.
:returns: posix timestamp in nanoseconds
:rtype: int
"""
return self.seconds * 10**9 + self.nanoseconds
def to_datetime(self):
"""Get the timestamp as a UTC datetime.
:rtype: `datetime.datetime`
"""
utc = datetime.timezone.utc
return datetime.datetime.fromtimestamp(0, utc) + datetime.timedelta(
seconds=self.seconds, microseconds=self.nanoseconds // 1000
)
@staticmethod
def from_datetime(dt):
"""Create a Timestamp from datetime with tzinfo.
:rtype: Timestamp
"""
return Timestamp(seconds=int(dt.timestamp()), nanoseconds=dt.microsecond * 1000)
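A sketch tying the two classes above together: a user-defined ``ExtType`` (code 42 is an arbitrary choice) round-tripped through the ``default``/``ext_hook`` options, plus the ``Timestamp`` Unix-time helpers:

```python
import struct

import msgpack
from msgpack import ExtType, Timestamp

def default(obj):
    # Encode complex numbers as application ext type 42.
    if isinstance(obj, complex):
        return ExtType(42, struct.pack("!dd", obj.real, obj.imag))
    raise TypeError(f"cannot serialize {obj!r}")

def ext_hook(code, data):
    if code == 42:
        real, imag = struct.unpack("!dd", data)
        return complex(real, imag)
    return ExtType(code, data)

packed = msgpack.packb(1 + 2j, default=default)
assert msgpack.unpackb(packed, ext_hook=ext_hook) == 1 + 2j

# Timestamp round-trips exactly through its Unix-time helpers.
ts = Timestamp.from_unix_nano(1_700_000_000_123_456_789)
assert ts.to_unix_nano() == 1_700_000_000_123_456_789
```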

929
msgpack/fallback.py Normal file
View file

@ -0,0 +1,929 @@
"""Fallback pure Python implementation of msgpack"""
import struct
import sys
from datetime import datetime as _DateTime
if hasattr(sys, "pypy_version_info"):
from __pypy__ import newlist_hint
from __pypy__.builders import BytesBuilder
_USING_STRINGBUILDER = True
class BytesIO:
def __init__(self, s=b""):
if s:
self.builder = BytesBuilder(len(s))
self.builder.append(s)
else:
self.builder = BytesBuilder()
def write(self, s):
if isinstance(s, memoryview):
s = s.tobytes()
elif isinstance(s, bytearray):
s = bytes(s)
self.builder.append(s)
def getvalue(self):
return self.builder.build()
else:
from io import BytesIO
_USING_STRINGBUILDER = False
def newlist_hint(size):
return []
from .exceptions import BufferFull, ExtraData, FormatError, OutOfData, StackError
from .ext import ExtType, Timestamp
EX_SKIP = 0
EX_CONSTRUCT = 1
EX_READ_ARRAY_HEADER = 2
EX_READ_MAP_HEADER = 3
TYPE_IMMEDIATE = 0
TYPE_ARRAY = 1
TYPE_MAP = 2
TYPE_RAW = 3
TYPE_BIN = 4
TYPE_EXT = 5
DEFAULT_RECURSE_LIMIT = 511
def _check_type_strict(obj, t, type=type, tuple=tuple):
if type(t) is tuple:
return type(obj) in t
else:
return type(obj) is t
def _get_data_from_buffer(obj):
view = memoryview(obj)
if view.itemsize != 1:
raise ValueError("cannot unpack from multi-byte object")
return view
def unpackb(packed, **kwargs):
"""
Unpack an object from `packed`.
Raises ``ExtraData`` when *packed* contains extra bytes.
Raises ``ValueError`` when *packed* is incomplete.
Raises ``FormatError`` when *packed* is not valid msgpack.
Raises ``StackError`` when *packed* contains too deeply nested data.
Other exceptions can be raised during unpacking.
See :class:`Unpacker` for options.
"""
unpacker = Unpacker(None, max_buffer_size=len(packed), **kwargs)
unpacker.feed(packed)
try:
ret = unpacker._unpack()
except OutOfData:
raise ValueError("Unpack failed: incomplete input")
except RecursionError:
raise StackError
if unpacker._got_extradata():
raise ExtraData(ret, unpacker._get_extradata())
return ret
_NO_FORMAT_USED = ""
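# header byte -> (bytes to read, struct format ("" means a single raw byte), optional TYPE_* constant)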
_MSGPACK_HEADERS = {
0xC4: (1, _NO_FORMAT_USED, TYPE_BIN),
0xC5: (2, ">H", TYPE_BIN),
0xC6: (4, ">I", TYPE_BIN),
0xC7: (2, "Bb", TYPE_EXT),
0xC8: (3, ">Hb", TYPE_EXT),
0xC9: (5, ">Ib", TYPE_EXT),
0xCA: (4, ">f"),
0xCB: (8, ">d"),
0xCC: (1, _NO_FORMAT_USED),
0xCD: (2, ">H"),
0xCE: (4, ">I"),
0xCF: (8, ">Q"),
0xD0: (1, "b"),
0xD1: (2, ">h"),
0xD2: (4, ">i"),
0xD3: (8, ">q"),
0xD4: (1, "b1s", TYPE_EXT),
0xD5: (2, "b2s", TYPE_EXT),
0xD6: (4, "b4s", TYPE_EXT),
0xD7: (8, "b8s", TYPE_EXT),
0xD8: (16, "b16s", TYPE_EXT),
0xD9: (1, _NO_FORMAT_USED, TYPE_RAW),
0xDA: (2, ">H", TYPE_RAW),
0xDB: (4, ">I", TYPE_RAW),
0xDC: (2, ">H", TYPE_ARRAY),
0xDD: (4, ">I", TYPE_ARRAY),
0xDE: (2, ">H", TYPE_MAP),
0xDF: (4, ">I", TYPE_MAP),
}
class Unpacker:
"""Streaming unpacker.
Arguments:
:param file_like:
File-like object having `.read(n)` method.
If specified, unpacker reads serialized data from it and `.feed()` is not usable.
:param int read_size:
Used as `file_like.read(read_size)`. (default: `min(16*1024, max_buffer_size)`)
:param bool use_list:
If true, unpack msgpack array to Python list.
Otherwise, unpack to Python tuple. (default: True)
:param bool raw:
If true, unpack msgpack raw to Python bytes.
Otherwise, unpack to Python str by decoding with UTF-8 encoding (default).
:param int timestamp:
Control how timestamp type is unpacked:
0 - Timestamp
1 - float (Seconds from the EPOCH)
2 - int (Nanoseconds from the EPOCH)
3 - datetime.datetime (UTC).
:param bool strict_map_key:
If true (default), only str or bytes are accepted for map (dict) keys.
:param object_hook:
When specified, it should be callable.
Unpacker calls it with a dict argument after unpacking msgpack map.
(See also simplejson)
:param object_pairs_hook:
When specified, it should be callable.
Unpacker calls it with a list of key-value pairs after unpacking msgpack map.
(See also simplejson)
:param str unicode_errors:
The error handler for decoding unicode. (default: 'strict')
This option should be used only when you have msgpack data that
contains invalid UTF-8 strings.
:param int max_buffer_size:
Limits size of data waiting to be unpacked. 0 means 2**31-1.
The default value is 100*1024*1024 (100MiB).
Raises `BufferFull` exception when it is insufficient.
You should set this parameter when unpacking data from an untrusted source.
:param int max_str_len:
Deprecated, use *max_buffer_size* instead.
Limits max length of str. (default: max_buffer_size)
:param int max_bin_len:
Deprecated, use *max_buffer_size* instead.
Limits max length of bin. (default: max_buffer_size)
:param int max_array_len:
Limits max length of array.
(default: max_buffer_size)
:param int max_map_len:
Limits max length of map.
(default: max_buffer_size//2)
:param int max_ext_len:
Deprecated, use *max_buffer_size* instead.
Limits max size of ext type. (default: max_buffer_size)
Example of streaming deserialize from file-like object::
unpacker = Unpacker(file_like)
for o in unpacker:
process(o)
Example of streaming deserialize from socket::
unpacker = Unpacker()
while True:
buf = sock.recv(1024**2)
if not buf:
break
unpacker.feed(buf)
for o in unpacker:
process(o)
Raises ``ExtraData`` when *packed* contains extra bytes.
Raises ``OutOfData`` when *packed* is incomplete.
Raises ``FormatError`` when *packed* is not valid msgpack.
Raises ``StackError`` when *packed* contains too deeply nested data.
Other exceptions can be raised during unpacking.
"""
def __init__(
self,
file_like=None,
*,
read_size=0,
use_list=True,
raw=False,
timestamp=0,
strict_map_key=True,
object_hook=None,
object_pairs_hook=None,
list_hook=None,
unicode_errors=None,
max_buffer_size=100 * 1024 * 1024,
ext_hook=ExtType,
max_str_len=-1,
max_bin_len=-1,
max_array_len=-1,
max_map_len=-1,
max_ext_len=-1,
):
if unicode_errors is None:
unicode_errors = "strict"
if file_like is None:
self._feeding = True
else:
if not callable(file_like.read):
raise TypeError("`file_like.read` must be callable")
self.file_like = file_like
self._feeding = False
#: array of bytes fed.
self._buffer = bytearray()
#: The position we are currently reading.
self._buff_i = 0
# When Unpacker is used as an iterable, between the calls to next(),
# the buffer is not "consumed" completely, for efficiency sake.
# Instead, it is done sloppily. To make sure we raise BufferFull at
# the correct moments, we have to keep track of how sloppy we were.
# Furthermore, when the buffer is incomplete (that is: in the case
# we raise an OutOfData) we need to rollback the buffer to the correct
# state, which _buf_checkpoint records.
self._buf_checkpoint = 0
if not max_buffer_size:
max_buffer_size = 2**31 - 1
if max_str_len == -1:
max_str_len = max_buffer_size
if max_bin_len == -1:
max_bin_len = max_buffer_size
if max_array_len == -1:
max_array_len = max_buffer_size
if max_map_len == -1:
max_map_len = max_buffer_size // 2
if max_ext_len == -1:
max_ext_len = max_buffer_size
self._max_buffer_size = max_buffer_size
if read_size > self._max_buffer_size:
raise ValueError("read_size must be smaller than max_buffer_size")
self._read_size = read_size or min(self._max_buffer_size, 16 * 1024)
self._raw = bool(raw)
self._strict_map_key = bool(strict_map_key)
self._unicode_errors = unicode_errors
self._use_list = use_list
if not (0 <= timestamp <= 3):
raise ValueError("timestamp must be 0..3")
self._timestamp = timestamp
self._list_hook = list_hook
self._object_hook = object_hook
self._object_pairs_hook = object_pairs_hook
self._ext_hook = ext_hook
self._max_str_len = max_str_len
self._max_bin_len = max_bin_len
self._max_array_len = max_array_len
self._max_map_len = max_map_len
self._max_ext_len = max_ext_len
self._stream_offset = 0
if list_hook is not None and not callable(list_hook):
raise TypeError("`list_hook` is not callable")
if object_hook is not None and not callable(object_hook):
raise TypeError("`object_hook` is not callable")
if object_pairs_hook is not None and not callable(object_pairs_hook):
raise TypeError("`object_pairs_hook` is not callable")
if object_hook is not None and object_pairs_hook is not None:
raise TypeError("object_pairs_hook and object_hook are mutually exclusive")
if not callable(ext_hook):
raise TypeError("`ext_hook` is not callable")
def feed(self, next_bytes):
assert self._feeding
view = _get_data_from_buffer(next_bytes)
if len(self._buffer) - self._buff_i + len(view) > self._max_buffer_size:
raise BufferFull
# Strip buffer before checkpoint before reading file.
if self._buf_checkpoint > 0:
del self._buffer[: self._buf_checkpoint]
self._buff_i -= self._buf_checkpoint
self._buf_checkpoint = 0
# Use extend here: INPLACE_ADD += doesn't reliably typecast memoryview in jython
self._buffer.extend(view)
view.release()
def _consume(self):
"""Gets rid of the used parts of the buffer."""
self._stream_offset += self._buff_i - self._buf_checkpoint
self._buf_checkpoint = self._buff_i
def _got_extradata(self):
return self._buff_i < len(self._buffer)
def _get_extradata(self):
return self._buffer[self._buff_i :]
def read_bytes(self, n):
ret = self._read(n, raise_outofdata=False)
self._consume()
return ret
def _read(self, n, raise_outofdata=True):
# (int) -> bytearray
self._reserve(n, raise_outofdata=raise_outofdata)
i = self._buff_i
ret = self._buffer[i : i + n]
self._buff_i = i + len(ret)
return ret
def _reserve(self, n, raise_outofdata=True):
remain_bytes = len(self._buffer) - self._buff_i - n
# Fast path: buffer has n bytes already
if remain_bytes >= 0:
return
if self._feeding:
self._buff_i = self._buf_checkpoint
raise OutOfData
# Strip buffer before checkpoint before reading file.
if self._buf_checkpoint > 0:
del self._buffer[: self._buf_checkpoint]
self._buff_i -= self._buf_checkpoint
self._buf_checkpoint = 0
# Read from file
remain_bytes = -remain_bytes
if remain_bytes + len(self._buffer) > self._max_buffer_size:
raise BufferFull
while remain_bytes > 0:
to_read_bytes = max(self._read_size, remain_bytes)
read_data = self.file_like.read(to_read_bytes)
if not read_data:
break
assert isinstance(read_data, bytes)
self._buffer += read_data
remain_bytes -= len(read_data)
if len(self._buffer) < n + self._buff_i and raise_outofdata:
self._buff_i = 0 # rollback
raise OutOfData
def _read_header(self):
typ = TYPE_IMMEDIATE
n = 0
obj = None
self._reserve(1)
b = self._buffer[self._buff_i]
self._buff_i += 1
if b & 0b10000000 == 0:
obj = b
elif b & 0b11100000 == 0b11100000:
obj = -1 - (b ^ 0xFF)
elif b & 0b11100000 == 0b10100000:
n = b & 0b00011111
typ = TYPE_RAW
if n > self._max_str_len:
raise ValueError(f"{n} exceeds max_str_len({self._max_str_len})")
obj = self._read(n)
elif b & 0b11110000 == 0b10010000:
n = b & 0b00001111
typ = TYPE_ARRAY
if n > self._max_array_len:
raise ValueError(f"{n} exceeds max_array_len({self._max_array_len})")
elif b & 0b11110000 == 0b10000000:
n = b & 0b00001111
typ = TYPE_MAP
if n > self._max_map_len:
raise ValueError(f"{n} exceeds max_map_len({self._max_map_len})")
elif b == 0xC0:
obj = None
elif b == 0xC2:
obj = False
elif b == 0xC3:
obj = True
elif 0xC4 <= b <= 0xC6:
size, fmt, typ = _MSGPACK_HEADERS[b]
self._reserve(size)
if len(fmt) > 0:
n = struct.unpack_from(fmt, self._buffer, self._buff_i)[0]
else:
n = self._buffer[self._buff_i]
self._buff_i += size
if n > self._max_bin_len:
raise ValueError(f"{n} exceeds max_bin_len({self._max_bin_len})")
obj = self._read(n)
elif 0xC7 <= b <= 0xC9:
size, fmt, typ = _MSGPACK_HEADERS[b]
self._reserve(size)
L, n = struct.unpack_from(fmt, self._buffer, self._buff_i)
self._buff_i += size
if L > self._max_ext_len:
raise ValueError(f"{L} exceeds max_ext_len({self._max_ext_len})")
obj = self._read(L)
elif 0xCA <= b <= 0xD3:
size, fmt = _MSGPACK_HEADERS[b]
self._reserve(size)
if len(fmt) > 0:
obj = struct.unpack_from(fmt, self._buffer, self._buff_i)[0]
else:
obj = self._buffer[self._buff_i]
self._buff_i += size
elif 0xD4 <= b <= 0xD8:
size, fmt, typ = _MSGPACK_HEADERS[b]
if self._max_ext_len < size:
raise ValueError(f"{size} exceeds max_ext_len({self._max_ext_len})")
self._reserve(size + 1)
n, obj = struct.unpack_from(fmt, self._buffer, self._buff_i)
self._buff_i += size + 1
elif 0xD9 <= b <= 0xDB:
size, fmt, typ = _MSGPACK_HEADERS[b]
self._reserve(size)
if len(fmt) > 0:
(n,) = struct.unpack_from(fmt, self._buffer, self._buff_i)
else:
n = self._buffer[self._buff_i]
self._buff_i += size
if n > self._max_str_len:
raise ValueError(f"{n} exceeds max_str_len({self._max_str_len})")
obj = self._read(n)
elif 0xDC <= b <= 0xDD:
size, fmt, typ = _MSGPACK_HEADERS[b]
self._reserve(size)
(n,) = struct.unpack_from(fmt, self._buffer, self._buff_i)
self._buff_i += size
if n > self._max_array_len:
raise ValueError(f"{n} exceeds max_array_len({self._max_array_len})")
elif 0xDE <= b <= 0xDF:
size, fmt, typ = _MSGPACK_HEADERS[b]
self._reserve(size)
(n,) = struct.unpack_from(fmt, self._buffer, self._buff_i)
self._buff_i += size
if n > self._max_map_len:
raise ValueError(f"{n} exceeds max_map_len({self._max_map_len})")
else:
raise FormatError("Unknown header: 0x%x" % b)
return typ, n, obj
def _unpack(self, execute=EX_CONSTRUCT):
typ, n, obj = self._read_header()
if execute == EX_READ_ARRAY_HEADER:
if typ != TYPE_ARRAY:
raise ValueError("Expected array")
return n
if execute == EX_READ_MAP_HEADER:
if typ != TYPE_MAP:
raise ValueError("Expected map")
return n
# TODO should we eliminate the recursion?
if typ == TYPE_ARRAY:
if execute == EX_SKIP:
for i in range(n):
# TODO check whether we need to call `list_hook`
self._unpack(EX_SKIP)
return
ret = newlist_hint(n)
for i in range(n):
ret.append(self._unpack(EX_CONSTRUCT))
if self._list_hook is not None:
ret = self._list_hook(ret)
# TODO is the interaction between `list_hook` and `use_list` ok?
return ret if self._use_list else tuple(ret)
if typ == TYPE_MAP:
if execute == EX_SKIP:
for i in range(n):
# TODO check whether we need to call hooks
self._unpack(EX_SKIP)
self._unpack(EX_SKIP)
return
if self._object_pairs_hook is not None:
ret = self._object_pairs_hook(
(self._unpack(EX_CONSTRUCT), self._unpack(EX_CONSTRUCT)) for _ in range(n)
)
else:
ret = {}
for _ in range(n):
key = self._unpack(EX_CONSTRUCT)
if self._strict_map_key and type(key) not in (str, bytes):
raise ValueError("%s is not allowed for map key" % str(type(key)))
if isinstance(key, str):
key = sys.intern(key)
ret[key] = self._unpack(EX_CONSTRUCT)
if self._object_hook is not None:
ret = self._object_hook(ret)
return ret
if execute == EX_SKIP:
return
if typ == TYPE_RAW:
if self._raw:
obj = bytes(obj)
else:
obj = obj.decode("utf_8", self._unicode_errors)
return obj
if typ == TYPE_BIN:
return bytes(obj)
if typ == TYPE_EXT:
if n == -1: # timestamp
ts = Timestamp.from_bytes(bytes(obj))
if self._timestamp == 1:
return ts.to_unix()
elif self._timestamp == 2:
return ts.to_unix_nano()
elif self._timestamp == 3:
return ts.to_datetime()
else:
return ts
else:
return self._ext_hook(n, bytes(obj))
assert typ == TYPE_IMMEDIATE
return obj
def __iter__(self):
return self
def __next__(self):
try:
ret = self._unpack(EX_CONSTRUCT)
self._consume()
return ret
except OutOfData:
self._consume()
raise StopIteration
except RecursionError:
raise StackError
next = __next__
def skip(self):
self._unpack(EX_SKIP)
self._consume()
def unpack(self):
try:
ret = self._unpack(EX_CONSTRUCT)
except RecursionError:
raise StackError
self._consume()
return ret
def read_array_header(self):
ret = self._unpack(EX_READ_ARRAY_HEADER)
self._consume()
return ret
def read_map_header(self):
ret = self._unpack(EX_READ_MAP_HEADER)
self._consume()
return ret
def tell(self):
return self._stream_offset
class Packer:
"""
MessagePack Packer
Usage::
packer = Packer()
astream.write(packer.pack(a))
astream.write(packer.pack(b))
Packer's constructor has some keyword arguments:
:param default:
When specified, it should be callable.
Convert user type to builtin type that Packer supports.
See also simplejson's documentation.
:param bool use_single_float:
Use single precision float type for float. (default: False)
:param bool autoreset:
Reset buffer after each pack and return its content as `bytes`. (default: True).
If set to false, use `bytes()` to get the content and `.reset()` to clear the buffer.
:param bool use_bin_type:
Use bin type introduced in msgpack spec 2.0 for bytes.
It also enables str8 type for unicode. (default: True)
:param bool strict_types:
If set to true, types will be checked to be exact. Classes derived
from serializable types will not be serialized; they will be
treated as unsupported types and forwarded to *default*.
Additionally, tuples will not be serialized as lists.
This is useful when trying to implement accurate serialization
for python types.
:param bool datetime:
If set to true, datetime with tzinfo is packed into Timestamp type.
Note that the tzinfo is stripped in the timestamp.
You can get UTC datetime with `timestamp=3` option of the Unpacker.
:param str unicode_errors:
The error handler for encoding unicode. (default: 'strict')
DO NOT USE THIS!! This option is kept for very specific usage.
:param int buf_size:
Internal buffer size. This option is used only for C implementation.
"""
def __init__(
self,
*,
default=None,
use_single_float=False,
autoreset=True,
use_bin_type=True,
strict_types=False,
datetime=False,
unicode_errors=None,
buf_size=None,
):
self._strict_types = strict_types
self._use_float = use_single_float
self._autoreset = autoreset
self._use_bin_type = use_bin_type
self._buffer = BytesIO()
self._datetime = bool(datetime)
self._unicode_errors = unicode_errors or "strict"
if default is not None and not callable(default):
raise TypeError("default must be callable")
self._default = default
def _pack(
self,
obj,
nest_limit=DEFAULT_RECURSE_LIMIT,
check=isinstance,
check_type_strict=_check_type_strict,
):
default_used = False
if self._strict_types:
check = check_type_strict
list_types = list
else:
list_types = (list, tuple)
while True:
if nest_limit < 0:
raise ValueError("recursion limit exceeded")
if obj is None:
return self._buffer.write(b"\xc0")
if check(obj, bool):
if obj:
return self._buffer.write(b"\xc3")
return self._buffer.write(b"\xc2")
if check(obj, int):
if 0 <= obj < 0x80:
return self._buffer.write(struct.pack("B", obj))
if -0x20 <= obj < 0:
return self._buffer.write(struct.pack("b", obj))
if 0x80 <= obj <= 0xFF:
return self._buffer.write(struct.pack("BB", 0xCC, obj))
if -0x80 <= obj < 0:
return self._buffer.write(struct.pack(">Bb", 0xD0, obj))
if 0xFF < obj <= 0xFFFF:
return self._buffer.write(struct.pack(">BH", 0xCD, obj))
if -0x8000 <= obj < -0x80:
return self._buffer.write(struct.pack(">Bh", 0xD1, obj))
if 0xFFFF < obj <= 0xFFFFFFFF:
return self._buffer.write(struct.pack(">BI", 0xCE, obj))
if -0x80000000 <= obj < -0x8000:
return self._buffer.write(struct.pack(">Bi", 0xD2, obj))
if 0xFFFFFFFF < obj <= 0xFFFFFFFFFFFFFFFF:
return self._buffer.write(struct.pack(">BQ", 0xCF, obj))
if -0x8000000000000000 <= obj < -0x80000000:
return self._buffer.write(struct.pack(">Bq", 0xD3, obj))
if not default_used and self._default is not None:
obj = self._default(obj)
default_used = True
continue
raise OverflowError("Integer value out of range")
if check(obj, (bytes, bytearray)):
n = len(obj)
if n >= 2**32:
raise ValueError("%s is too large" % type(obj).__name__)
self._pack_bin_header(n)
return self._buffer.write(obj)
if check(obj, str):
obj = obj.encode("utf-8", self._unicode_errors)
n = len(obj)
if n >= 2**32:
raise ValueError("String is too large")
self._pack_raw_header(n)
return self._buffer.write(obj)
if check(obj, memoryview):
n = obj.nbytes
if n >= 2**32:
raise ValueError("Memoryview is too large")
self._pack_bin_header(n)
return self._buffer.write(obj)
if check(obj, float):
if self._use_float:
return self._buffer.write(struct.pack(">Bf", 0xCA, obj))
return self._buffer.write(struct.pack(">Bd", 0xCB, obj))
if check(obj, (ExtType, Timestamp)):
if check(obj, Timestamp):
code = -1
data = obj.to_bytes()
else:
code = obj.code
data = obj.data
assert isinstance(code, int)
assert isinstance(data, bytes)
L = len(data)
if L == 1:
self._buffer.write(b"\xd4")
elif L == 2:
self._buffer.write(b"\xd5")
elif L == 4:
self._buffer.write(b"\xd6")
elif L == 8:
self._buffer.write(b"\xd7")
elif L == 16:
self._buffer.write(b"\xd8")
elif L <= 0xFF:
self._buffer.write(struct.pack(">BB", 0xC7, L))
elif L <= 0xFFFF:
self._buffer.write(struct.pack(">BH", 0xC8, L))
else:
self._buffer.write(struct.pack(">BI", 0xC9, L))
self._buffer.write(struct.pack("b", code))
self._buffer.write(data)
return
if check(obj, list_types):
n = len(obj)
self._pack_array_header(n)
for i in range(n):
self._pack(obj[i], nest_limit - 1)
return
if check(obj, dict):
return self._pack_map_pairs(len(obj), obj.items(), nest_limit - 1)
if self._datetime and check(obj, _DateTime) and obj.tzinfo is not None:
obj = Timestamp.from_datetime(obj)
default_used = True
continue
if not default_used and self._default is not None:
obj = self._default(obj)
default_used = True
continue
if self._datetime and check(obj, _DateTime):
raise ValueError(f"Cannot serialize {obj!r} where tzinfo=None")
raise TypeError(f"Cannot serialize {obj!r}")
def pack(self, obj):
try:
self._pack(obj)
except:
self._buffer = BytesIO() # force reset
raise
if self._autoreset:
ret = self._buffer.getvalue()
self._buffer = BytesIO()
return ret
def pack_map_pairs(self, pairs):
self._pack_map_pairs(len(pairs), pairs)
if self._autoreset:
ret = self._buffer.getvalue()
self._buffer = BytesIO()
return ret
def pack_array_header(self, n):
if n >= 2**32:
raise ValueError
self._pack_array_header(n)
if self._autoreset:
ret = self._buffer.getvalue()
self._buffer = BytesIO()
return ret
def pack_map_header(self, n):
if n >= 2**32:
raise ValueError
self._pack_map_header(n)
if self._autoreset:
ret = self._buffer.getvalue()
self._buffer = BytesIO()
return ret
def pack_ext_type(self, typecode, data):
if not isinstance(typecode, int):
raise TypeError("typecode must have int type.")
if not 0 <= typecode <= 127:
raise ValueError("typecode should be 0-127")
if not isinstance(data, bytes):
raise TypeError("data must have bytes type")
L = len(data)
if L > 0xFFFFFFFF:
raise ValueError("Too large data")
if L == 1:
self._buffer.write(b"\xd4")
elif L == 2:
self._buffer.write(b"\xd5")
elif L == 4:
self._buffer.write(b"\xd6")
elif L == 8:
self._buffer.write(b"\xd7")
elif L == 16:
self._buffer.write(b"\xd8")
elif L <= 0xFF:
self._buffer.write(b"\xc7" + struct.pack("B", L))
elif L <= 0xFFFF:
self._buffer.write(b"\xc8" + struct.pack(">H", L))
else:
self._buffer.write(b"\xc9" + struct.pack(">I", L))
self._buffer.write(struct.pack("B", typecode))
self._buffer.write(data)
def _pack_array_header(self, n):
if n <= 0x0F:
return self._buffer.write(struct.pack("B", 0x90 + n))
if n <= 0xFFFF:
return self._buffer.write(struct.pack(">BH", 0xDC, n))
if n <= 0xFFFFFFFF:
return self._buffer.write(struct.pack(">BI", 0xDD, n))
raise ValueError("Array is too large")
def _pack_map_header(self, n):
if n <= 0x0F:
return self._buffer.write(struct.pack("B", 0x80 + n))
if n <= 0xFFFF:
return self._buffer.write(struct.pack(">BH", 0xDE, n))
if n <= 0xFFFFFFFF:
return self._buffer.write(struct.pack(">BI", 0xDF, n))
raise ValueError("Dict is too large")
def _pack_map_pairs(self, n, pairs, nest_limit=DEFAULT_RECURSE_LIMIT):
self._pack_map_header(n)
for k, v in pairs:
self._pack(k, nest_limit - 1)
self._pack(v, nest_limit - 1)
def _pack_raw_header(self, n):
if n <= 0x1F:
self._buffer.write(struct.pack("B", 0xA0 + n))
elif self._use_bin_type and n <= 0xFF:
self._buffer.write(struct.pack(">BB", 0xD9, n))
elif n <= 0xFFFF:
self._buffer.write(struct.pack(">BH", 0xDA, n))
elif n <= 0xFFFFFFFF:
self._buffer.write(struct.pack(">BI", 0xDB, n))
else:
raise ValueError("Raw is too large")
def _pack_bin_header(self, n):
if not self._use_bin_type:
return self._pack_raw_header(n)
elif n <= 0xFF:
return self._buffer.write(struct.pack(">BB", 0xC4, n))
elif n <= 0xFFFF:
return self._buffer.write(struct.pack(">BH", 0xC5, n))
elif n <= 0xFFFFFFFF:
return self._buffer.write(struct.pack(">BI", 0xC6, n))
else:
raise ValueError("Bin is too large")
def bytes(self):
"""Return internal buffer contents as bytes object"""
return self._buffer.getvalue()
def reset(self):
"""Reset internal buffer.
This method is useful only when autoreset=False.
"""
self._buffer = BytesIO()
def getbuffer(self):
"""Return view of internal buffer."""
if _USING_STRINGBUILDER:
return memoryview(self.bytes())
else:
return self._buffer.getbuffer()
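To connect the pieces of fallback.py shown above, here is a short usage sketch. It is not part of the diff and only relies on names defined in this file (`Packer`, `Unpacker`, `unpackb`):

```
from msgpack.fallback import Packer, Unpacker, unpackb

packer = Packer()                                   # autoreset=True: pack() returns bytes
blob = packer.pack({"name": "msgpack", "version": [1, 1, 2]})

# one-shot helper
assert unpackb(blob) == {"name": "msgpack", "version": [1, 1, 2]}

# streaming: feed arbitrary chunks, then iterate complete objects
unpacker = Unpacker()
unpacker.feed(blob[:3])
unpacker.feed(blob[3:])
for obj in unpacker:
    print(obj)
```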

View file

@ -19,56 +19,23 @@
#include <stddef.h>
#include <stdlib.h>
#include "sysdep.h"
#include "pack_define.h"
#include <limits.h>
#include <string.h>
#include <stdbool.h>
#ifdef __cplusplus
extern "C" {
#endif
#ifdef _MSC_VER
#define inline __inline
#endif
typedef struct msgpack_packer {
char *buf;
size_t length;
size_t buf_size;
bool use_bin_type;
} msgpack_packer;
typedef struct Packer Packer;
static inline int msgpack_pack_short(msgpack_packer* pk, short d);
static inline int msgpack_pack_int(msgpack_packer* pk, int d);
static inline int msgpack_pack_long(msgpack_packer* pk, long d);
static inline int msgpack_pack_long_long(msgpack_packer* pk, long long d);
static inline int msgpack_pack_unsigned_short(msgpack_packer* pk, unsigned short d);
static inline int msgpack_pack_unsigned_int(msgpack_packer* pk, unsigned int d);
static inline int msgpack_pack_unsigned_long(msgpack_packer* pk, unsigned long d);
static inline int msgpack_pack_unsigned_long_long(msgpack_packer* pk, unsigned long long d);
static inline int msgpack_pack_uint8(msgpack_packer* pk, uint8_t d);
static inline int msgpack_pack_uint16(msgpack_packer* pk, uint16_t d);
static inline int msgpack_pack_uint32(msgpack_packer* pk, uint32_t d);
static inline int msgpack_pack_uint64(msgpack_packer* pk, uint64_t d);
static inline int msgpack_pack_int8(msgpack_packer* pk, int8_t d);
static inline int msgpack_pack_int16(msgpack_packer* pk, int16_t d);
static inline int msgpack_pack_int32(msgpack_packer* pk, int32_t d);
static inline int msgpack_pack_int64(msgpack_packer* pk, int64_t d);
static inline int msgpack_pack_float(msgpack_packer* pk, float d);
static inline int msgpack_pack_double(msgpack_packer* pk, double d);
static inline int msgpack_pack_nil(msgpack_packer* pk);
static inline int msgpack_pack_true(msgpack_packer* pk);
static inline int msgpack_pack_false(msgpack_packer* pk);
static inline int msgpack_pack_array(msgpack_packer* pk, unsigned int n);
static inline int msgpack_pack_map(msgpack_packer* pk, unsigned int n);
static inline int msgpack_pack_raw(msgpack_packer* pk, size_t l);
static inline int msgpack_pack_raw_body(msgpack_packer* pk, const void* b, size_t l);
static inline int msgpack_pack_write(msgpack_packer* pk, const char *data, size_t l)
{
char* buf = pk->buf;
@ -77,8 +44,11 @@ static inline int msgpack_pack_write(msgpack_packer* pk, const char *data, size_
if (len + l > bs) {
bs = (len + l) * 2;
buf = realloc(buf, bs);
if (!buf) return -1;
buf = (char*)PyMem_Realloc(buf, bs);
if (!buf) {
PyErr_NoMemory();
return -1;
}
}
memcpy(buf + len, data, l);
len += l;
@ -89,14 +59,6 @@ static inline int msgpack_pack_write(msgpack_packer* pk, const char *data, size_
return 0;
}
#define msgpack_pack_inline_func(name) \
static inline int msgpack_pack ## name
#define msgpack_pack_inline_func_cint(name) \
static inline int msgpack_pack ## name
#define msgpack_pack_user msgpack_packer*
#define msgpack_pack_append_buffer(user, buf, len) \
return msgpack_pack_write(user, (const char*)buf, len)

View file

@ -1,25 +0,0 @@
/*
* MessagePack unpacking routine template
*
* Copyright (C) 2008-2009 FURUHASHI Sadayuki
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#ifndef MSGPACK_PACK_DEFINE_H__
#define MSGPACK_PACK_DEFINE_H__
#include "sysdep.h"
#include <limits.h>
#endif /* msgpack/pack_define.h */

File diff suppressed because it is too large

View file

@ -1,7 +1,7 @@
/*
* MessagePack system dependencies
*
* Copyright (C) 2008-2009 FURUHASHI Sadayuki
* Copyright (C) 2008-2010 FURUHASHI Sadayuki
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@ -18,8 +18,9 @@
#ifndef MSGPACK_SYSDEP_H__
#define MSGPACK_SYSDEP_H__
#ifdef _MSC_VER
#include <stdlib.h>
#include <stddef.h>
#if defined(_MSC_VER) && _MSC_VER < 1600
typedef __int8 int8_t;
typedef unsigned __int8 uint8_t;
typedef __int16 int16_t;
@ -28,44 +29,86 @@ typedef __int32 int32_t;
typedef unsigned __int32 uint32_t;
typedef __int64 int64_t;
typedef unsigned __int64 uint64_t;
#elif defined(_MSC_VER) // && _MSC_VER >= 1600
#include <stdint.h>
#else
#include <stddef.h>
#include <stdint.h>
#include <stdbool.h>
#endif
#ifdef _WIN32
#define _msgpack_atomic_counter_header <windows.h>
typedef long _msgpack_atomic_counter_t;
#define _msgpack_sync_decr_and_fetch(ptr) InterlockedDecrement(ptr)
#define _msgpack_sync_incr_and_fetch(ptr) InterlockedIncrement(ptr)
#elif defined(__GNUC__) && ((__GNUC__*10 + __GNUC_MINOR__) < 41)
#define _msgpack_atomic_counter_header "gcc_atomic.h"
#else
typedef unsigned int _msgpack_atomic_counter_t;
#define _msgpack_sync_decr_and_fetch(ptr) __sync_sub_and_fetch(ptr, 1)
#define _msgpack_sync_incr_and_fetch(ptr) __sync_add_and_fetch(ptr, 1)
#endif
#ifdef _WIN32
#include <winsock2.h>
#else
#include <arpa/inet.h> /* __BYTE_ORDER */
#ifdef __cplusplus
/* numeric_limits<T>::min,max */
#ifdef max
#undef max
#endif
#ifdef min
#undef min
#endif
#endif
#else /* _WIN32 */
#include <arpa/inet.h> /* ntohs, ntohl */
#endif
#if !defined(__LITTLE_ENDIAN__) && !defined(__BIG_ENDIAN__)
#if __BYTE_ORDER == __LITTLE_ENDIAN
#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
#define __LITTLE_ENDIAN__
#elif __BYTE_ORDER == __BIG_ENDIAN
#elif __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
#define __BIG_ENDIAN__
#elif _WIN32
#define __LITTLE_ENDIAN__
#endif
#endif
#ifdef __LITTLE_ENDIAN__
#define _msgpack_be16(x) ntohs(x)
#define _msgpack_be32(x) ntohl(x)
#ifdef _WIN32
# if defined(ntohs)
# define _msgpack_be16(x) ntohs(x)
# elif defined(_byteswap_ushort) || (defined(_MSC_VER) && _MSC_VER >= 1400)
# define _msgpack_be16(x) ((uint16_t)_byteswap_ushort((unsigned short)x))
# else
# define _msgpack_be16(x) ( \
((((uint16_t)x) << 8) ) | \
((((uint16_t)x) >> 8) ) )
# endif
#else
# define _msgpack_be16(x) ntohs(x)
#endif
#if defined(_byteswap_uint64)
#ifdef _WIN32
# if defined(ntohl)
# define _msgpack_be32(x) ntohl(x)
# elif defined(_byteswap_ulong) || defined(_MSC_VER)
# define _msgpack_be32(x) ((uint32_t)_byteswap_ulong((unsigned long)x))
# else
# define _msgpack_be32(x) \
( ((((uint32_t)x) << 24) ) | \
((((uint32_t)x) << 8) & 0x00ff0000U ) | \
((((uint32_t)x) >> 8) & 0x0000ff00U ) | \
((((uint32_t)x) >> 24) ) )
# endif
#else
# define _msgpack_be32(x) ntohl(x)
#endif
#if defined(_byteswap_uint64) || defined(_MSC_VER)
# define _msgpack_be64(x) (_byteswap_uint64(x))
#elif defined(bswap_64)
# define _msgpack_be64(x) bswap_64(x)
@ -73,22 +116,79 @@ typedef unsigned int _msgpack_atomic_counter_t;
# define _msgpack_be64(x) __DARWIN_OSSwapInt64(x)
#else
#define _msgpack_be64(x) \
( ((((uint64_t)x) << 56) & 0xff00000000000000ULL ) | \
((((uint64_t)x) << 40) & 0x00ff000000000000ULL ) | \
((((uint64_t)x) << 24) & 0x0000ff0000000000ULL ) | \
((((uint64_t)x) << 8) & 0x000000ff00000000ULL ) | \
((((uint64_t)x) >> 8) & 0x00000000ff000000ULL ) | \
((((uint64_t)x) >> 24) & 0x0000000000ff0000ULL ) | \
((((uint64_t)x) >> 40) & 0x000000000000ff00ULL ) | \
((((uint64_t)x) >> 56) & 0x00000000000000ffULL ) )
( ((((uint64_t)x) << 56) ) | \
((((uint64_t)x) << 40) & 0x00ff000000000000ULL ) | \
((((uint64_t)x) << 24) & 0x0000ff0000000000ULL ) | \
((((uint64_t)x) << 8) & 0x000000ff00000000ULL ) | \
((((uint64_t)x) >> 8) & 0x00000000ff000000ULL ) | \
((((uint64_t)x) >> 24) & 0x0000000000ff0000ULL ) | \
((((uint64_t)x) >> 40) & 0x000000000000ff00ULL ) | \
((((uint64_t)x) >> 56) ) )
#endif
#define _msgpack_load16(cast, from) ((cast)( \
(((uint16_t)((uint8_t*)(from))[0]) << 8) | \
(((uint16_t)((uint8_t*)(from))[1]) ) ))
#define _msgpack_load32(cast, from) ((cast)( \
(((uint32_t)((uint8_t*)(from))[0]) << 24) | \
(((uint32_t)((uint8_t*)(from))[1]) << 16) | \
(((uint32_t)((uint8_t*)(from))[2]) << 8) | \
(((uint32_t)((uint8_t*)(from))[3]) ) ))
#define _msgpack_load64(cast, from) ((cast)( \
(((uint64_t)((uint8_t*)(from))[0]) << 56) | \
(((uint64_t)((uint8_t*)(from))[1]) << 48) | \
(((uint64_t)((uint8_t*)(from))[2]) << 40) | \
(((uint64_t)((uint8_t*)(from))[3]) << 32) | \
(((uint64_t)((uint8_t*)(from))[4]) << 24) | \
(((uint64_t)((uint8_t*)(from))[5]) << 16) | \
(((uint64_t)((uint8_t*)(from))[6]) << 8) | \
(((uint64_t)((uint8_t*)(from))[7]) ) ))
#else
#define _msgpack_be16(x) (x)
#define _msgpack_be32(x) (x)
#define _msgpack_be64(x) (x)
#define _msgpack_load16(cast, from) ((cast)( \
(((uint16_t)((uint8_t*)from)[0]) << 8) | \
(((uint16_t)((uint8_t*)from)[1]) ) ))
#define _msgpack_load32(cast, from) ((cast)( \
(((uint32_t)((uint8_t*)from)[0]) << 24) | \
(((uint32_t)((uint8_t*)from)[1]) << 16) | \
(((uint32_t)((uint8_t*)from)[2]) << 8) | \
(((uint32_t)((uint8_t*)from)[3]) ) ))
#define _msgpack_load64(cast, from) ((cast)( \
(((uint64_t)((uint8_t*)from)[0]) << 56) | \
(((uint64_t)((uint8_t*)from)[1]) << 48) | \
(((uint64_t)((uint8_t*)from)[2]) << 40) | \
(((uint64_t)((uint8_t*)from)[3]) << 32) | \
(((uint64_t)((uint8_t*)from)[4]) << 24) | \
(((uint64_t)((uint8_t*)from)[5]) << 16) | \
(((uint64_t)((uint8_t*)from)[6]) << 8) | \
(((uint64_t)((uint8_t*)from)[7]) ) ))
#endif
#endif /* msgpack/sysdep.h */
#define _msgpack_store16(to, num) \
do { uint16_t val = _msgpack_be16(num); memcpy(to, &val, 2); } while(0)
#define _msgpack_store32(to, num) \
do { uint32_t val = _msgpack_be32(num); memcpy(to, &val, 4); } while(0)
#define _msgpack_store64(to, num) \
do { uint64_t val = _msgpack_be64(num); memcpy(to, &val, 8); } while(0)
/*
#define _msgpack_load16(cast, from) \
({ cast val; memcpy(&val, (char*)from, 2); _msgpack_be16(val); })
#define _msgpack_load32(cast, from) \
({ cast val; memcpy(&val, (char*)from, 4); _msgpack_be32(val); })
#define _msgpack_load64(cast, from) \
({ cast val; memcpy(&val, (char*)from, 8); _msgpack_be64(val); })
*/
#endif /* msgpack/sysdep.h */
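The `_msgpack_store*` / `_msgpack_load*` macros simply write and read big-endian (network order) integers. For illustration only, the equivalent effect in Python via the standard `struct` module:

```
import struct

# mirrors _msgpack_store32 / _msgpack_load32 on a little-endian host
buf = struct.pack(">I", 0x12345678)            # b'\x124Vx' (big-endian byte order)
assert struct.unpack(">I", buf)[0] == 0x12345678
```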

View file

@ -16,61 +16,65 @@
* limitations under the License.
*/
#define MSGPACK_MAX_STACK_SIZE (1024)
#define MSGPACK_EMBED_STACK_SIZE (1024)
#include "unpack_define.h"
typedef struct unpack_user {
int use_list;
bool use_list;
bool raw;
bool has_pairs_hook;
bool strict_map_key;
int timestamp;
PyObject *object_hook;
PyObject *list_hook;
const char *encoding;
PyObject *ext_hook;
PyObject *timestamp_t;
PyObject *giga;
PyObject *utc;
const char *unicode_errors;
Py_ssize_t max_str_len, max_bin_len, max_array_len, max_map_len, max_ext_len;
} unpack_user;
typedef PyObject* msgpack_unpack_object;
struct unpack_context;
typedef struct unpack_context unpack_context;
typedef int (*execute_fn)(unpack_context *ctx, const char* data, Py_ssize_t len, Py_ssize_t* off);
#define msgpack_unpack_struct(name) \
struct template ## name
#define msgpack_unpack_func(ret, name) \
static inline ret template ## name
#define msgpack_unpack_callback(name) \
template_callback ## name
#define msgpack_unpack_object PyObject*
#define msgpack_unpack_user unpack_user
struct template_context;
typedef struct template_context template_context;
static inline msgpack_unpack_object template_callback_root(unpack_user* u)
static inline msgpack_unpack_object unpack_callback_root(unpack_user* u)
{
return NULL;
}
static inline int template_callback_uint16(unpack_user* u, uint16_t d, msgpack_unpack_object* o)
static inline int unpack_callback_uint16(unpack_user* u, uint16_t d, msgpack_unpack_object* o)
{
PyObject *p = PyInt_FromLong((long)d);
PyObject *p = PyLong_FromLong((long)d);
if (!p)
return -1;
*o = p;
return 0;
}
static inline int template_callback_uint8(unpack_user* u, uint8_t d, msgpack_unpack_object* o)
static inline int unpack_callback_uint8(unpack_user* u, uint8_t d, msgpack_unpack_object* o)
{
return template_callback_uint16(u, d, o);
return unpack_callback_uint16(u, d, o);
}
static inline int template_callback_uint32(unpack_user* u, uint32_t d, msgpack_unpack_object* o)
static inline int unpack_callback_uint32(unpack_user* u, uint32_t d, msgpack_unpack_object* o)
{
PyObject *p = PyLong_FromSize_t((size_t)d);
if (!p)
return -1;
*o = p;
return 0;
}
static inline int unpack_callback_uint64(unpack_user* u, uint64_t d, msgpack_unpack_object* o)
{
PyObject *p;
if (d > LONG_MAX) {
p = PyLong_FromUnsignedLong((unsigned long)d);
p = PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG)d);
} else {
p = PyInt_FromLong((long)d);
p = PyLong_FromLong((long)d);
}
if (!p)
return -1;
@ -78,44 +82,38 @@ static inline int template_callback_uint32(unpack_user* u, uint32_t d, msgpack_u
return 0;
}
static inline int template_callback_uint64(unpack_user* u, uint64_t d, msgpack_unpack_object* o)
static inline int unpack_callback_int32(unpack_user* u, int32_t d, msgpack_unpack_object* o)
{
PyObject *p = PyLong_FromUnsignedLongLong(d);
PyObject *p = PyLong_FromLong(d);
if (!p)
return -1;
*o = p;
return 0;
}
static inline int template_callback_int32(unpack_user* u, int32_t d, msgpack_unpack_object* o)
static inline int unpack_callback_int16(unpack_user* u, int16_t d, msgpack_unpack_object* o)
{
PyObject *p = PyInt_FromLong(d);
if (!p)
return -1;
return unpack_callback_int32(u, d, o);
}
static inline int unpack_callback_int8(unpack_user* u, int8_t d, msgpack_unpack_object* o)
{
return unpack_callback_int32(u, d, o);
}
static inline int unpack_callback_int64(unpack_user* u, int64_t d, msgpack_unpack_object* o)
{
PyObject *p;
if (d > LONG_MAX || d < LONG_MIN) {
p = PyLong_FromLongLong((PY_LONG_LONG)d);
} else {
p = PyLong_FromLong((long)d);
}
*o = p;
return 0;
}
static inline int template_callback_int16(unpack_user* u, int16_t d, msgpack_unpack_object* o)
{
return template_callback_int32(u, d, o);
}
static inline int template_callback_int8(unpack_user* u, int8_t d, msgpack_unpack_object* o)
{
return template_callback_int32(u, d, o);
}
static inline int template_callback_int64(unpack_user* u, int64_t d, msgpack_unpack_object* o)
{
PyObject *p = PyLong_FromLongLong(d);
if (!p)
return -1;
*o = p;
return 0;
}
static inline int template_callback_double(unpack_user* u, double d, msgpack_unpack_object* o)
static inline int unpack_callback_double(unpack_user* u, double d, msgpack_unpack_object* o)
{
PyObject *p = PyFloat_FromDouble(d);
if (!p)
@ -124,22 +122,26 @@ static inline int template_callback_double(unpack_user* u, double d, msgpack_unp
return 0;
}
static inline int template_callback_float(unpack_user* u, float d, msgpack_unpack_object* o)
static inline int unpack_callback_float(unpack_user* u, float d, msgpack_unpack_object* o)
{
return template_callback_double(u, d, o);
return unpack_callback_double(u, d, o);
}
static inline int template_callback_nil(unpack_user* u, msgpack_unpack_object* o)
static inline int unpack_callback_nil(unpack_user* u, msgpack_unpack_object* o)
{ Py_INCREF(Py_None); *o = Py_None; return 0; }
static inline int template_callback_true(unpack_user* u, msgpack_unpack_object* o)
static inline int unpack_callback_true(unpack_user* u, msgpack_unpack_object* o)
{ Py_INCREF(Py_True); *o = Py_True; return 0; }
static inline int template_callback_false(unpack_user* u, msgpack_unpack_object* o)
static inline int unpack_callback_false(unpack_user* u, msgpack_unpack_object* o)
{ Py_INCREF(Py_False); *o = Py_False; return 0; }
static inline int template_callback_array(unpack_user* u, unsigned int n, msgpack_unpack_object* o)
static inline int unpack_callback_array(unpack_user* u, unsigned int n, msgpack_unpack_object* o)
{
if (n > u->max_array_len) {
PyErr_Format(PyExc_ValueError, "%u exceeds max_array_len(%zd)", n, u->max_array_len);
return -1;
}
PyObject *p = u->use_list ? PyList_New(n) : PyTuple_New(n);
if (!p)
@ -148,7 +150,7 @@ static inline int template_callback_array(unpack_user* u, unsigned int n, msgpac
return 0;
}
static inline int template_callback_array_item(unpack_user* u, unsigned int current, msgpack_unpack_object* c, msgpack_unpack_object o)
static inline int unpack_callback_array_item(unpack_user* u, unsigned int current, msgpack_unpack_object* c, msgpack_unpack_object o)
{
if (u->use_list)
PyList_SET_ITEM(*c, current, o);
@ -157,28 +159,56 @@ static inline int template_callback_array_item(unpack_user* u, unsigned int curr
return 0;
}
static inline int template_callback_array_end(unpack_user* u, msgpack_unpack_object* c)
static inline int unpack_callback_array_end(unpack_user* u, msgpack_unpack_object* c)
{
if (u->list_hook) {
PyObject *arglist = Py_BuildValue("(O)", *c);
*c = PyEval_CallObject(u->list_hook, arglist);
Py_DECREF(arglist);
PyObject *new_c = PyObject_CallFunctionObjArgs(u->list_hook, *c, NULL);
if (!new_c)
return -1;
Py_DECREF(*c);
*c = new_c;
}
return 0;
}
static inline int template_callback_map(unpack_user* u, unsigned int n, msgpack_unpack_object* o)
static inline int unpack_callback_map(unpack_user* u, unsigned int n, msgpack_unpack_object* o)
{
PyObject *p = PyDict_New();
if (n > u->max_map_len) {
PyErr_Format(PyExc_ValueError, "%u exceeds max_map_len(%zd)", n, u->max_map_len);
return -1;
}
PyObject *p;
if (u->has_pairs_hook) {
p = PyList_New(n); // Or use tuple?
}
else {
p = PyDict_New();
}
if (!p)
return -1;
*o = p;
return 0;
}
static inline int template_callback_map_item(unpack_user* u, msgpack_unpack_object* c, msgpack_unpack_object k, msgpack_unpack_object v)
static inline int unpack_callback_map_item(unpack_user* u, unsigned int current, msgpack_unpack_object* c, msgpack_unpack_object k, msgpack_unpack_object v)
{
if (PyDict_SetItem(*c, k, v) == 0) {
if (u->strict_map_key && !PyUnicode_CheckExact(k) && !PyBytes_CheckExact(k)) {
PyErr_Format(PyExc_ValueError, "%.100s is not allowed for map key when strict_map_key=True", Py_TYPE(k)->tp_name);
return -1;
}
if (PyUnicode_CheckExact(k)) {
PyUnicode_InternInPlace(&k);
}
if (u->has_pairs_hook) {
msgpack_unpack_object item = PyTuple_Pack(2, k, v);
if (!item)
return -1;
Py_DECREF(k);
Py_DECREF(v);
PyList_SET_ITEM(*c, current, item);
return 0;
}
else if (PyDict_SetItem(*c, k, v) == 0) {
Py_DECREF(k);
Py_DECREF(v);
return 0;
@ -186,23 +216,171 @@ static inline int template_callback_map_item(unpack_user* u, msgpack_unpack_obje
return -1;
}
static inline int template_callback_map_end(unpack_user* u, msgpack_unpack_object* c)
static inline int unpack_callback_map_end(unpack_user* u, msgpack_unpack_object* c)
{
if (u->object_hook) {
PyObject *arglist = Py_BuildValue("(O)", *c);
*c = PyEval_CallObject(u->object_hook, arglist);
Py_DECREF(arglist);
PyObject *new_c = PyObject_CallFunctionObjArgs(u->object_hook, *c, NULL);
if (!new_c)
return -1;
Py_DECREF(*c);
*c = new_c;
}
return 0;
}
static inline int template_callback_raw(unpack_user* u, const char* b, const char* p, unsigned int l, msgpack_unpack_object* o)
static inline int unpack_callback_raw(unpack_user* u, const char* b, const char* p, unsigned int l, msgpack_unpack_object* o)
{
if (l > u->max_str_len) {
PyErr_Format(PyExc_ValueError, "%u exceeds max_str_len(%zd)", l, u->max_str_len);
return -1;
}
PyObject *py;
if(u->encoding) {
py = PyUnicode_Decode(p, l, u->encoding, u->unicode_errors);
} else {
if (u->raw) {
py = PyBytes_FromStringAndSize(p, l);
} else {
py = PyUnicode_DecodeUTF8(p, l, u->unicode_errors);
}
if (!py)
return -1;
*o = py;
return 0;
}
static inline int unpack_callback_bin(unpack_user* u, const char* b, const char* p, unsigned int l, msgpack_unpack_object* o)
{
if (l > u->max_bin_len) {
PyErr_Format(PyExc_ValueError, "%u exceeds max_bin_len(%zd)", l, u->max_bin_len);
return -1;
}
PyObject *py = PyBytes_FromStringAndSize(p, l);
if (!py)
return -1;
*o = py;
return 0;
}
typedef struct msgpack_timestamp {
int64_t tv_sec;
uint32_t tv_nsec;
} msgpack_timestamp;
/*
* Unpack ext buffer to a timestamp. Pulled from msgpack-c timestamp.h.
*/
static int unpack_timestamp(const char* buf, unsigned int buflen, msgpack_timestamp* ts) {
switch (buflen) {
case 4:
ts->tv_nsec = 0;
{
uint32_t v = _msgpack_load32(uint32_t, buf);
ts->tv_sec = (int64_t)v;
}
return 0;
case 8: {
uint64_t value =_msgpack_load64(uint64_t, buf);
ts->tv_nsec = (uint32_t)(value >> 34);
ts->tv_sec = value & 0x00000003ffffffffLL;
return 0;
}
case 12:
ts->tv_nsec = _msgpack_load32(uint32_t, buf);
ts->tv_sec = _msgpack_load64(int64_t, buf + 4);
return 0;
default:
return -1;
}
}
#include "datetime.h"
static int unpack_callback_ext(unpack_user* u, const char* base, const char* pos,
unsigned int length, msgpack_unpack_object* o)
{
int8_t typecode = (int8_t)*pos++;
if (!u->ext_hook) {
PyErr_SetString(PyExc_AssertionError, "u->ext_hook cannot be NULL");
return -1;
}
if (length-1 > u->max_ext_len) {
PyErr_Format(PyExc_ValueError, "%u exceeds max_ext_len(%zd)", length, u->max_ext_len);
return -1;
}
PyObject *py = NULL;
// length also includes the typecode, so the actual data is length-1
if (typecode == -1) {
msgpack_timestamp ts;
if (unpack_timestamp(pos, length-1, &ts) < 0) {
return -1;
}
if (u->timestamp == 2) { // int
PyObject *a = PyLong_FromLongLong(ts.tv_sec);
if (a == NULL) return -1;
PyObject *c = PyNumber_Multiply(a, u->giga);
Py_DECREF(a);
if (c == NULL) {
return -1;
}
PyObject *b = PyLong_FromUnsignedLong(ts.tv_nsec);
if (b == NULL) {
Py_DECREF(c);
return -1;
}
py = PyNumber_Add(c, b);
Py_DECREF(c);
Py_DECREF(b);
}
else if (u->timestamp == 0) { // Timestamp
py = PyObject_CallFunction(u->timestamp_t, "(Lk)", ts.tv_sec, ts.tv_nsec);
}
else if (u->timestamp == 3) { // datetime
// Calculate datetime using epoch + delta
// due to limitations of PyDateTime_FromTimestamp on Windows with negative timestamps
PyObject *epoch = PyDateTimeAPI->DateTime_FromDateAndTime(1970, 1, 1, 0, 0, 0, 0, u->utc, PyDateTimeAPI->DateTimeType);
if (epoch == NULL) {
return -1;
}
PyObject* d = PyDelta_FromDSU(ts.tv_sec/(24*3600), ts.tv_sec%(24*3600), ts.tv_nsec / 1000);
if (d == NULL) {
Py_DECREF(epoch);
return -1;
}
py = PyNumber_Add(epoch, d);
Py_DECREF(epoch);
Py_DECREF(d);
}
else { // float
PyObject *a = PyFloat_FromDouble((double)ts.tv_nsec);
if (a == NULL) return -1;
PyObject *b = PyNumber_TrueDivide(a, u->giga);
Py_DECREF(a);
if (b == NULL) return -1;
PyObject *c = PyLong_FromLongLong(ts.tv_sec);
if (c == NULL) {
Py_DECREF(b);
return -1;
}
a = PyNumber_Add(b, c);
Py_DECREF(b);
Py_DECREF(c);
py = a;
}
} else {
py = PyObject_CallFunction(u->ext_hook, "(iy#)", (int)typecode, pos, (Py_ssize_t)length-1);
}
if (!py)
return -1;

View file

@ -0,0 +1,51 @@
static inline int unpack_container_header(unpack_context* ctx, const char* data, Py_ssize_t len, Py_ssize_t* off)
{
assert(len >= *off);
uint32_t size;
const unsigned char *const p = (unsigned char*)data + *off;
#define inc_offset(inc) \
if (len - *off < inc) \
return 0; \
*off += inc;
switch (*p) {
case var_offset:
inc_offset(3);
size = _msgpack_load16(uint16_t, p + 1);
break;
case var_offset + 1:
inc_offset(5);
size = _msgpack_load32(uint32_t, p + 1);
break;
#ifdef USE_CASE_RANGE
case fixed_offset + 0x0 ... fixed_offset + 0xf:
#else
case fixed_offset + 0x0:
case fixed_offset + 0x1:
case fixed_offset + 0x2:
case fixed_offset + 0x3:
case fixed_offset + 0x4:
case fixed_offset + 0x5:
case fixed_offset + 0x6:
case fixed_offset + 0x7:
case fixed_offset + 0x8:
case fixed_offset + 0x9:
case fixed_offset + 0xa:
case fixed_offset + 0xb:
case fixed_offset + 0xc:
case fixed_offset + 0xd:
case fixed_offset + 0xe:
case fixed_offset + 0xf:
#endif
++*off;
size = ((unsigned int)*p) & 0x0f;
break;
default:
PyErr_SetString(PyExc_ValueError, "Unexpected type header on stream");
return -1;
}
unpack_callback_uint32(&ctx->user, size, &ctx->stack[0].obj);
return 1;
}
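This helper parses just an array/map header (the fixed form plus the 16/32-bit forms). The pure-Python `Unpacker` shown earlier exposes the same capability through `read_array_header()` / `read_map_header()`; a brief usage sketch against that fallback implementation:

```
from msgpack.fallback import Packer, Unpacker

unpacker = Unpacker()
unpacker.feed(Packer().pack([10, 20, 30]))

n = unpacker.read_array_header()              # consumes only the array header
items = [unpacker.unpack() for _ in range(n)]
assert items == [10, 20, 30]
```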

View file

@ -1,7 +1,7 @@
/*
* MessagePack unpacking routine template
*
* Copyright (C) 2008-2009 FURUHASHI Sadayuki
* Copyright (C) 2008-2010 FURUHASHI Sadayuki
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@ -18,7 +18,8 @@
#ifndef MSGPACK_UNPACK_DEFINE_H__
#define MSGPACK_UNPACK_DEFINE_H__
#include "sysdep.h"
#include "msgpack/sysdep.h"
#include <stdlib.h>
#include <string.h>
#include <assert.h>
#include <stdio.h>
@ -28,59 +29,62 @@ extern "C" {
#endif
#ifndef MSGPACK_MAX_STACK_SIZE
#define MSGPACK_MAX_STACK_SIZE 16
#ifndef MSGPACK_EMBED_STACK_SIZE
#define MSGPACK_EMBED_STACK_SIZE 32
#endif
// CS is first byte & 0x1f
typedef enum {
CS_HEADER = 0x00, // nil
CS_HEADER = 0x00, // nil
//CS_ = 0x01,
//CS_ = 0x02, // false
//CS_ = 0x03, // true
//CS_ = 0x01,
//CS_ = 0x02, // false
//CS_ = 0x03, // true
//CS_ = 0x04,
//CS_ = 0x05,
//CS_ = 0x06,
//CS_ = 0x07,
CS_BIN_8 = 0x04,
CS_BIN_16 = 0x05,
CS_BIN_32 = 0x06,
//CS_ = 0x08,
//CS_ = 0x09,
CS_FLOAT = 0x0a,
CS_DOUBLE = 0x0b,
CS_UINT_8 = 0x0c,
CS_UINT_16 = 0x0d,
CS_UINT_32 = 0x0e,
CS_UINT_64 = 0x0f,
CS_INT_8 = 0x10,
CS_INT_16 = 0x11,
CS_INT_32 = 0x12,
CS_INT_64 = 0x13,
CS_EXT_8 = 0x07,
CS_EXT_16 = 0x08,
CS_EXT_32 = 0x09,
//CS_ = 0x14,
//CS_ = 0x15,
//CS_BIG_INT_16 = 0x16,
//CS_BIG_INT_32 = 0x17,
//CS_BIG_FLOAT_16 = 0x18,
//CS_BIG_FLOAT_32 = 0x19,
CS_RAW_16 = 0x1a,
CS_RAW_32 = 0x1b,
CS_ARRAY_16 = 0x1c,
CS_ARRAY_32 = 0x1d,
CS_MAP_16 = 0x1e,
CS_MAP_32 = 0x1f,
CS_FLOAT = 0x0a,
CS_DOUBLE = 0x0b,
CS_UINT_8 = 0x0c,
CS_UINT_16 = 0x0d,
CS_UINT_32 = 0x0e,
CS_UINT_64 = 0x0f,
CS_INT_8 = 0x10,
CS_INT_16 = 0x11,
CS_INT_32 = 0x12,
CS_INT_64 = 0x13,
//ACS_BIG_INT_VALUE,
//ACS_BIG_FLOAT_VALUE,
ACS_RAW_VALUE,
//CS_FIXEXT1 = 0x14,
//CS_FIXEXT2 = 0x15,
//CS_FIXEXT4 = 0x16,
//CS_FIXEXT8 = 0x17,
//CS_FIXEXT16 = 0x18,
CS_RAW_8 = 0x19,
CS_RAW_16 = 0x1a,
CS_RAW_32 = 0x1b,
CS_ARRAY_16 = 0x1c,
CS_ARRAY_32 = 0x1d,
CS_MAP_16 = 0x1e,
CS_MAP_32 = 0x1f,
ACS_RAW_VALUE,
ACS_BIN_VALUE,
ACS_EXT_VALUE,
} msgpack_unpack_state;
typedef enum {
CT_ARRAY_ITEM,
CT_MAP_KEY,
CT_MAP_VALUE,
CT_ARRAY_ITEM,
CT_MAP_KEY,
CT_MAP_VALUE,
} msgpack_container_type;
@ -89,4 +93,3 @@ typedef enum {
#endif
#endif /* msgpack/unpack_define.h */
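As the `// CS is first byte & 0x1f` comment notes, the parser state for a variable-length header is derived directly from the header byte. A quick sanity check of two of the constants above:

```
# 0xdc is "array 16"; masking with 0x1f yields CS_ARRAY_16 (0x1c)
assert 0xDC & 0x1F == 0x1C
# 0xca is "float 32"; masking yields CS_FLOAT (0x0a)
assert 0xCA & 0x1F == 0x0A
```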

View file

@ -1,7 +1,7 @@
/*
* MessagePack unpacking routine template
*
* Copyright (C) 2008-2009 FURUHASHI Sadayuki
* Copyright (C) 2008-2010 FURUHASHI Sadayuki
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@ -16,130 +16,124 @@
* limitations under the License.
*/
#ifndef msgpack_unpack_func
#error msgpack_unpack_func template is not defined
#endif
#ifndef msgpack_unpack_callback
#error msgpack_unpack_callback template is not defined
#endif
#ifndef msgpack_unpack_struct
#error msgpack_unpack_struct template is not defined
#endif
#ifndef msgpack_unpack_struct_decl
#define msgpack_unpack_struct_decl(name) msgpack_unpack_struct(name)
#endif
#ifndef msgpack_unpack_object
#error msgpack_unpack_object type is not defined
#endif
#ifndef msgpack_unpack_user
#error msgpack_unpack_user type is not defined
#endif
#ifndef USE_CASE_RANGE
#if !defined(_MSC_VER)
#define USE_CASE_RANGE
#endif
#endif
msgpack_unpack_struct_decl(_stack) {
msgpack_unpack_object obj;
size_t count;
unsigned int ct;
typedef struct unpack_stack {
PyObject* obj;
Py_ssize_t size;
Py_ssize_t count;
unsigned int ct;
PyObject* map_key;
} unpack_stack;
union {
size_t curr;
msgpack_unpack_object map_key;
};
};
msgpack_unpack_struct_decl(_context) {
msgpack_unpack_user user;
unsigned int cs;
unsigned int trail;
unsigned int top;
msgpack_unpack_struct(_stack) stack[MSGPACK_MAX_STACK_SIZE];
struct unpack_context {
unpack_user user;
unsigned int cs;
unsigned int trail;
unsigned int top;
/*
unpack_stack* stack;
unsigned int stack_size;
unpack_stack embed_stack[MSGPACK_EMBED_STACK_SIZE];
*/
unpack_stack stack[MSGPACK_EMBED_STACK_SIZE];
};
msgpack_unpack_func(void, _init)(msgpack_unpack_struct(_context)* ctx)
static inline void unpack_init(unpack_context* ctx)
{
ctx->cs = CS_HEADER;
ctx->trail = 0;
ctx->top = 0;
ctx->stack[0].obj = msgpack_unpack_callback(_root)(&ctx->user);
ctx->cs = CS_HEADER;
ctx->trail = 0;
ctx->top = 0;
/*
ctx->stack = ctx->embed_stack;
ctx->stack_size = MSGPACK_EMBED_STACK_SIZE;
*/
ctx->stack[0].obj = unpack_callback_root(&ctx->user);
}
msgpack_unpack_func(msgpack_unpack_object, _data)(msgpack_unpack_struct(_context)* ctx)
/*
static inline void unpack_destroy(unpack_context* ctx)
{
return (ctx)->stack[0].obj;
if(ctx->stack_size != MSGPACK_EMBED_STACK_SIZE) {
free(ctx->stack);
}
}
*/
static inline PyObject* unpack_data(unpack_context* ctx)
{
return (ctx)->stack[0].obj;
}
msgpack_unpack_func(int, _execute)(msgpack_unpack_struct(_context)* ctx, const char* data, size_t len, size_t* off)
static inline void unpack_clear(unpack_context *ctx)
{
const unsigned char* p = (unsigned char*)data + *off;
const unsigned char* const pe = (unsigned char*)data + len;
const void* n = NULL;
Py_CLEAR(ctx->stack[0].obj);
}
unsigned int trail = ctx->trail;
unsigned int cs = ctx->cs;
unsigned int top = ctx->top;
static inline int unpack_execute(bool construct, unpack_context* ctx, const char* data, Py_ssize_t len, Py_ssize_t* off)
{
assert(len >= *off);
msgpack_unpack_struct(_stack)* stack = ctx->stack;
msgpack_unpack_user* user = &ctx->user;
const unsigned char* p = (unsigned char*)data + *off;
const unsigned char* const pe = (unsigned char*)data + len;
const void* n = p;
msgpack_unpack_object obj;
msgpack_unpack_struct(_stack)* c = NULL;
unsigned int trail = ctx->trail;
unsigned int cs = ctx->cs;
unsigned int top = ctx->top;
unpack_stack* stack = ctx->stack;
/*
unsigned int stack_size = ctx->stack_size;
*/
unpack_user* user = &ctx->user;
int ret;
PyObject* obj = NULL;
unpack_stack* c = NULL;
assert(len >= *off);
int ret;
#define construct_cb(name) \
construct && unpack_callback ## name
#define push_simple_value(func) \
if(msgpack_unpack_callback(func)(user, &obj) < 0) { goto _failed; } \
goto _push
if(construct_cb(func)(user, &obj) < 0) { goto _failed; } \
goto _push
#define push_fixed_value(func, arg) \
if(msgpack_unpack_callback(func)(user, arg, &obj) < 0) { goto _failed; } \
goto _push
if(construct_cb(func)(user, arg, &obj) < 0) { goto _failed; } \
goto _push
#define push_variable_value(func, base, pos, len) \
if(msgpack_unpack_callback(func)(user, \
(const char*)base, (const char*)pos, len, &obj) < 0) { goto _failed; } \
goto _push
if(construct_cb(func)(user, \
(const char*)base, (const char*)pos, len, &obj) < 0) { goto _failed; } \
goto _push
#define again_fixed_trail(_cs, trail_len) \
trail = trail_len; \
cs = _cs; \
goto _fixed_trail_again
trail = trail_len; \
cs = _cs; \
goto _fixed_trail_again
#define again_fixed_trail_if_zero(_cs, trail_len, ifzero) \
trail = trail_len; \
if(trail == 0) { goto ifzero; } \
cs = _cs; \
goto _fixed_trail_again
trail = trail_len; \
if(trail == 0) { goto ifzero; } \
cs = _cs; \
goto _fixed_trail_again
#define start_container(func, count_, ct_) \
if(msgpack_unpack_callback(func)(user, count_, &stack[top].obj) < 0) { goto _failed; } \
if((count_) == 0) { obj = stack[top].obj; goto _push; } \
if(top >= MSGPACK_MAX_STACK_SIZE) { goto _failed; } \
stack[top].ct = ct_; \
stack[top].curr = 0; \
stack[top].count = count_; \
/*printf("container %d count %d stack %d\n",stack[top].obj,count_,top);*/ \
/*printf("stack push %d\n", top);*/ \
++top; \
goto _header_again
if(top >= MSGPACK_EMBED_STACK_SIZE) { ret = -3; goto _end; } \
if(construct_cb(func)(user, count_, &stack[top].obj) < 0) { goto _failed; } \
if((count_) == 0) { obj = stack[top].obj; \
if (construct_cb(func##_end)(user, &obj) < 0) { goto _failed; } \
goto _push; } \
stack[top].ct = ct_; \
stack[top].size = count_; \
stack[top].count = 0; \
++top; \
goto _header_again
#define NEXT_CS(p) \
((unsigned int)*p & 0x1f)
#define PTR_CAST_8(ptr) (*(uint8_t*)ptr)
#define PTR_CAST_16(ptr) _msgpack_be16(*(uint16_t*)ptr)
#define PTR_CAST_32(ptr) _msgpack_be32(*(uint32_t*)ptr)
#define PTR_CAST_64(ptr) _msgpack_be64(*(uint64_t*)ptr)
#define NEXT_CS(p) ((unsigned int)*p & 0x1f)
#ifdef USE_CASE_RANGE
#define SWITCH_RANGE_BEGIN switch(*p) {
@ -153,224 +147,249 @@ msgpack_unpack_func(int, _execute)(msgpack_unpack_struct(_context)* ctx, const c
#define SWITCH_RANGE_END } }
#endif
if(p == pe) { goto _out; }
do {
switch(cs) {
case CS_HEADER:
SWITCH_RANGE_BEGIN
SWITCH_RANGE(0x00, 0x7f) // Positive Fixnum
push_fixed_value(_uint8, *(uint8_t*)p);
SWITCH_RANGE(0xe0, 0xff) // Negative Fixnum
push_fixed_value(_int8, *(int8_t*)p);
SWITCH_RANGE(0xc0, 0xdf) // Variable
switch(*p) {
case 0xc0: // nil
push_simple_value(_nil);
//case 0xc1: // string
// again_terminal_trail(NEXT_CS(p), p+1);
case 0xc2: // false
push_simple_value(_false);
case 0xc3: // true
push_simple_value(_true);
//case 0xc4:
//case 0xc5:
//case 0xc6:
//case 0xc7:
//case 0xc8:
//case 0xc9:
case 0xca: // float
case 0xcb: // double
case 0xcc: // unsigned int 8
case 0xcd: // unsigned int 16
case 0xce: // unsigned int 32
case 0xcf: // unsigned int 64
case 0xd0: // signed int 8
case 0xd1: // signed int 16
case 0xd2: // signed int 32
case 0xd3: // signed int 64
again_fixed_trail(NEXT_CS(p), 1 << (((unsigned int)*p) & 0x03));
//case 0xd4:
//case 0xd5:
//case 0xd6: // big integer 16
//case 0xd7: // big integer 32
//case 0xd8: // big float 16
//case 0xd9: // big float 32
case 0xda: // raw 16
case 0xdb: // raw 32
case 0xdc: // array 16
case 0xdd: // array 32
case 0xde: // map 16
case 0xdf: // map 32
again_fixed_trail(NEXT_CS(p), 2 << (((unsigned int)*p) & 0x01));
default:
goto _failed;
}
SWITCH_RANGE(0xa0, 0xbf) // FixRaw
again_fixed_trail_if_zero(ACS_RAW_VALUE, ((unsigned int)*p & 0x1f), _raw_zero);
SWITCH_RANGE(0x90, 0x9f) // FixArray
start_container(_array, ((unsigned int)*p) & 0x0f, CT_ARRAY_ITEM);
SWITCH_RANGE(0x80, 0x8f) // FixMap
start_container(_map, ((unsigned int)*p) & 0x0f, CT_MAP_KEY);
if(p == pe) { goto _out; }
do {
switch(cs) {
case CS_HEADER:
SWITCH_RANGE_BEGIN
SWITCH_RANGE(0x00, 0x7f) // Positive Fixnum
push_fixed_value(_uint8, *(uint8_t*)p);
SWITCH_RANGE(0xe0, 0xff) // Negative Fixnum
push_fixed_value(_int8, *(int8_t*)p);
SWITCH_RANGE(0xc0, 0xdf) // Variable
switch(*p) {
case 0xc0: // nil
push_simple_value(_nil);
//case 0xc1: // never used
case 0xc2: // false
push_simple_value(_false);
case 0xc3: // true
push_simple_value(_true);
case 0xc4: // bin 8
again_fixed_trail(NEXT_CS(p), 1);
case 0xc5: // bin 16
again_fixed_trail(NEXT_CS(p), 2);
case 0xc6: // bin 32
again_fixed_trail(NEXT_CS(p), 4);
case 0xc7: // ext 8
again_fixed_trail(NEXT_CS(p), 1);
case 0xc8: // ext 16
again_fixed_trail(NEXT_CS(p), 2);
case 0xc9: // ext 32
again_fixed_trail(NEXT_CS(p), 4);
case 0xca: // float
case 0xcb: // double
case 0xcc: // unsigned int 8
case 0xcd: // unsigned int 16
case 0xce: // unsigned int 32
case 0xcf: // unsigned int 64
case 0xd0: // signed int 8
case 0xd1: // signed int 16
case 0xd2: // signed int 32
case 0xd3: // signed int 64
again_fixed_trail(NEXT_CS(p), 1 << (((unsigned int)*p) & 0x03));
case 0xd4: // fixext 1
case 0xd5: // fixext 2
case 0xd6: // fixext 4
case 0xd7: // fixext 8
again_fixed_trail_if_zero(ACS_EXT_VALUE,
(1 << (((unsigned int)*p) & 0x03))+1,
_ext_zero);
case 0xd8: // fixext 16
again_fixed_trail_if_zero(ACS_EXT_VALUE, 16+1, _ext_zero);
case 0xd9: // str 8
again_fixed_trail(NEXT_CS(p), 1);
case 0xda: // raw 16
case 0xdb: // raw 32
case 0xdc: // array 16
case 0xdd: // array 32
case 0xde: // map 16
case 0xdf: // map 32
again_fixed_trail(NEXT_CS(p), 2 << (((unsigned int)*p) & 0x01));
default:
ret = -2;
goto _end;
}
SWITCH_RANGE(0xa0, 0xbf) // FixRaw
again_fixed_trail_if_zero(ACS_RAW_VALUE, ((unsigned int)*p & 0x1f), _raw_zero);
SWITCH_RANGE(0x90, 0x9f) // FixArray
start_container(_array, ((unsigned int)*p) & 0x0f, CT_ARRAY_ITEM);
SWITCH_RANGE(0x80, 0x8f) // FixMap
start_container(_map, ((unsigned int)*p) & 0x0f, CT_MAP_KEY);
SWITCH_RANGE_DEFAULT
goto _failed;
SWITCH_RANGE_END
// end CS_HEADER
SWITCH_RANGE_DEFAULT
ret = -2;
goto _end;
SWITCH_RANGE_END
// end CS_HEADER
_fixed_trail_again:
++p;
_fixed_trail_again:
++p;
default:
if((size_t)(pe - p) < trail) { goto _out; }
n = p; p += trail - 1;
switch(cs) {
//case CS_
//case CS_
case CS_FLOAT: {
union { uint32_t num; char buf[4]; } f;
f.num = PTR_CAST_32(n); // FIXME
push_fixed_value(_float, *((float*)f.buf)); }
case CS_DOUBLE: {
union { uint64_t num; char buf[8]; } f;
f.num = PTR_CAST_64(n); // FIXME
push_fixed_value(_double, *((double*)f.buf)); }
case CS_UINT_8:
push_fixed_value(_uint8, (uint8_t)PTR_CAST_8(n));
case CS_UINT_16:
push_fixed_value(_uint16, (uint16_t)PTR_CAST_16(n));
case CS_UINT_32:
push_fixed_value(_uint32, (uint32_t)PTR_CAST_32(n));
case CS_UINT_64:
push_fixed_value(_uint64, (uint64_t)PTR_CAST_64(n));
default:
if((size_t)(pe - p) < trail) { goto _out; }
n = p; p += trail - 1;
switch(cs) {
case CS_EXT_8:
again_fixed_trail_if_zero(ACS_EXT_VALUE, *(uint8_t*)n+1, _ext_zero);
case CS_EXT_16:
again_fixed_trail_if_zero(ACS_EXT_VALUE,
_msgpack_load16(uint16_t,n)+1,
_ext_zero);
case CS_EXT_32:
again_fixed_trail_if_zero(ACS_EXT_VALUE,
_msgpack_load32(uint32_t,n)+1,
_ext_zero);
case CS_FLOAT: {
double f;
#if PY_VERSION_HEX >= 0x030B00A7
f = PyFloat_Unpack4((const char*)n, 0);
#else
f = _PyFloat_Unpack4((unsigned char*)n, 0);
#endif
push_fixed_value(_float, f); }
case CS_DOUBLE: {
double f;
#if PY_VERSION_HEX >= 0x030B00A7
f = PyFloat_Unpack8((const char*)n, 0);
#else
f = _PyFloat_Unpack8((unsigned char*)n, 0);
#endif
push_fixed_value(_double, f); }
case CS_UINT_8:
push_fixed_value(_uint8, *(uint8_t*)n);
case CS_UINT_16:
push_fixed_value(_uint16, _msgpack_load16(uint16_t,n));
case CS_UINT_32:
push_fixed_value(_uint32, _msgpack_load32(uint32_t,n));
case CS_UINT_64:
push_fixed_value(_uint64, _msgpack_load64(uint64_t,n));
case CS_INT_8:
push_fixed_value(_int8, (int8_t)PTR_CAST_8(n));
case CS_INT_16:
push_fixed_value(_int16, (int16_t)PTR_CAST_16(n));
case CS_INT_32:
push_fixed_value(_int32, (int32_t)PTR_CAST_32(n));
case CS_INT_64:
push_fixed_value(_int64, (int64_t)PTR_CAST_64(n));
case CS_INT_8:
push_fixed_value(_int8, *(int8_t*)n);
case CS_INT_16:
push_fixed_value(_int16, _msgpack_load16(int16_t,n));
case CS_INT_32:
push_fixed_value(_int32, _msgpack_load32(int32_t,n));
case CS_INT_64:
push_fixed_value(_int64, _msgpack_load64(int64_t,n));
//case CS_
//case CS_
//case CS_BIG_INT_16:
// again_fixed_trail_if_zero(ACS_BIG_INT_VALUE, (uint16_t)PTR_CAST_16(n), _big_int_zero);
//case CS_BIG_INT_32:
// again_fixed_trail_if_zero(ACS_BIG_INT_VALUE, (uint32_t)PTR_CAST_32(n), _big_int_zero);
//case ACS_BIG_INT_VALUE:
//_big_int_zero:
// // FIXME
// push_variable_value(_big_int, data, n, trail);
case CS_BIN_8:
again_fixed_trail_if_zero(ACS_BIN_VALUE, *(uint8_t*)n, _bin_zero);
case CS_BIN_16:
again_fixed_trail_if_zero(ACS_BIN_VALUE, _msgpack_load16(uint16_t,n), _bin_zero);
case CS_BIN_32:
again_fixed_trail_if_zero(ACS_BIN_VALUE, _msgpack_load32(uint32_t,n), _bin_zero);
case ACS_BIN_VALUE:
_bin_zero:
push_variable_value(_bin, data, n, trail);
//case CS_BIG_FLOAT_16:
// again_fixed_trail_if_zero(ACS_BIG_FLOAT_VALUE, (uint16_t)PTR_CAST_16(n), _big_float_zero);
//case CS_BIG_FLOAT_32:
// again_fixed_trail_if_zero(ACS_BIG_FLOAT_VALUE, (uint32_t)PTR_CAST_32(n), _big_float_zero);
//case ACS_BIG_FLOAT_VALUE:
//_big_float_zero:
// // FIXME
// push_variable_value(_big_float, data, n, trail);
case CS_RAW_8:
again_fixed_trail_if_zero(ACS_RAW_VALUE, *(uint8_t*)n, _raw_zero);
case CS_RAW_16:
again_fixed_trail_if_zero(ACS_RAW_VALUE, _msgpack_load16(uint16_t,n), _raw_zero);
case CS_RAW_32:
again_fixed_trail_if_zero(ACS_RAW_VALUE, _msgpack_load32(uint32_t,n), _raw_zero);
case ACS_RAW_VALUE:
_raw_zero:
push_variable_value(_raw, data, n, trail);
case CS_RAW_16:
again_fixed_trail_if_zero(ACS_RAW_VALUE, (uint16_t)PTR_CAST_16(n), _raw_zero);
case CS_RAW_32:
again_fixed_trail_if_zero(ACS_RAW_VALUE, (uint32_t)PTR_CAST_32(n), _raw_zero);
case ACS_RAW_VALUE:
_raw_zero:
push_variable_value(_raw, data, n, trail);
case ACS_EXT_VALUE:
_ext_zero:
push_variable_value(_ext, data, n, trail);
case CS_ARRAY_16:
start_container(_array, (uint16_t)PTR_CAST_16(n), CT_ARRAY_ITEM);
case CS_ARRAY_32:
/* FIXME security guard */
start_container(_array, (uint32_t)PTR_CAST_32(n), CT_ARRAY_ITEM);
case CS_ARRAY_16:
start_container(_array, _msgpack_load16(uint16_t,n), CT_ARRAY_ITEM);
case CS_ARRAY_32:
/* FIXME security guard */
start_container(_array, _msgpack_load32(uint32_t,n), CT_ARRAY_ITEM);
case CS_MAP_16:
start_container(_map, (uint16_t)PTR_CAST_16(n), CT_MAP_KEY);
case CS_MAP_32:
/* FIXME security guard */
start_container(_map, (uint32_t)PTR_CAST_32(n), CT_MAP_KEY);
case CS_MAP_16:
start_container(_map, _msgpack_load16(uint16_t,n), CT_MAP_KEY);
case CS_MAP_32:
/* FIXME security guard */
start_container(_map, _msgpack_load32(uint32_t,n), CT_MAP_KEY);
default:
goto _failed;
}
}
default:
goto _failed;
}
}
_push:
if(top == 0) { goto _finish; }
c = &stack[top-1];
switch(c->ct) {
case CT_ARRAY_ITEM:
if(msgpack_unpack_callback(_array_item)(user, c->curr, &c->obj, obj) < 0) { goto _failed; }
if(++c->curr == c->count) {
msgpack_unpack_callback(_array_end)(user, &c->obj);
obj = c->obj;
--top;
/*printf("stack pop %d\n", top);*/
goto _push;
}
goto _header_again;
case CT_MAP_KEY:
c->map_key = obj;
c->ct = CT_MAP_VALUE;
goto _header_again;
case CT_MAP_VALUE:
if(msgpack_unpack_callback(_map_item)(user, &c->obj, c->map_key, obj) < 0) { goto _failed; }
if(--c->count == 0) {
msgpack_unpack_callback(_map_end)(user, &c->obj);
obj = c->obj;
--top;
/*printf("stack pop %d\n", top);*/
goto _push;
}
c->ct = CT_MAP_KEY;
goto _header_again;
if(top == 0) { goto _finish; }
c = &stack[top-1];
switch(c->ct) {
case CT_ARRAY_ITEM:
if(construct_cb(_array_item)(user, c->count, &c->obj, obj) < 0) { goto _failed; }
if(++c->count == c->size) {
obj = c->obj;
if (construct_cb(_array_end)(user, &obj) < 0) { goto _failed; }
--top;
/*printf("stack pop %d\n", top);*/
goto _push;
}
goto _header_again;
case CT_MAP_KEY:
c->map_key = obj;
c->ct = CT_MAP_VALUE;
goto _header_again;
case CT_MAP_VALUE:
if(construct_cb(_map_item)(user, c->count, &c->obj, c->map_key, obj) < 0) { goto _failed; }
if(++c->count == c->size) {
obj = c->obj;
if (construct_cb(_map_end)(user, &obj) < 0) { goto _failed; }
--top;
/*printf("stack pop %d\n", top);*/
goto _push;
}
c->ct = CT_MAP_KEY;
goto _header_again;
default:
goto _failed;
}
default:
goto _failed;
}
_header_again:
cs = CS_HEADER;
++p;
} while(p != pe);
goto _out;
cs = CS_HEADER;
++p;
} while(p != pe);
goto _out;
_finish:
stack[0].obj = obj;
++p;
ret = 1;
/*printf("-- finish --\n"); */
goto _end;
if (!construct)
unpack_callback_nil(user, &obj);
stack[0].obj = obj;
++p;
ret = 1;
/*printf("-- finish --\n"); */
goto _end;
_failed:
/*printf("** FAILED **\n"); */
ret = -1;
goto _end;
/*printf("** FAILED **\n"); */
ret = -1;
goto _end;
_out:
ret = 0;
goto _end;
ret = 0;
goto _end;
_end:
ctx->cs = cs;
ctx->trail = trail;
ctx->top = top;
*off = p - (const unsigned char*)data;
ctx->cs = cs;
ctx->trail = trail;
ctx->top = top;
*off = p - (const unsigned char*)data;
return ret;
return ret;
#undef construct_cb
}
#undef msgpack_unpack_func
#undef msgpack_unpack_callback
#undef msgpack_unpack_struct
#undef msgpack_unpack_object
#undef msgpack_unpack_user
#undef NEXT_CS
#undef SWITCH_RANGE_BEGIN
#undef SWITCH_RANGE
#undef SWITCH_RANGE_DEFAULT
#undef SWITCH_RANGE_END
#undef push_simple_value
#undef push_fixed_value
#undef push_variable_value
@ -378,9 +397,27 @@ _end:
#undef again_fixed_trail_if_zero
#undef start_container
#undef NEXT_CS
#undef PTR_CAST_8
#undef PTR_CAST_16
#undef PTR_CAST_32
#undef PTR_CAST_64
static int unpack_construct(unpack_context *ctx, const char *data, Py_ssize_t len, Py_ssize_t *off) {
return unpack_execute(1, ctx, data, len, off);
}
static int unpack_skip(unpack_context *ctx, const char *data, Py_ssize_t len, Py_ssize_t *off) {
return unpack_execute(0, ctx, data, len, off);
}
#define unpack_container_header read_array_header
#define fixed_offset 0x90
#define var_offset 0xdc
#include "unpack_container_header.h"
#undef unpack_container_header
#undef fixed_offset
#undef var_offset
#define unpack_container_header read_map_header
#define fixed_offset 0x80
#define var_offset 0xde
#include "unpack_container_header.h"
#undef unpack_container_header
#undef fixed_offset
#undef var_offset
/* vim: set ts=4 sw=4 sts=4 expandtab */
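The template above is compiled into two entry points — `unpack_construct()` builds Python objects through the `construct_cb` callbacks, while `unpack_skip()` drives the same state machine with `construct=0` and builds nothing — and the two parameterizations of `unpack_container_header.h` generate `read_array_header` and `read_map_header`. At the Python level these roughly correspond to `Unpacker.unpack()`, `Unpacker.skip()`, and `Unpacker.read_array_header()`/`read_map_header()`. A minimal sketch of that mapping (buffer contents and names are illustrative, not taken from this diff):

```python
# Minimal sketch of how the four C entry points above surface on Unpacker.
from io import BytesIO

from msgpack import Unpacker, packb

buf = BytesIO(packb([1, 2, 3]) + packb({"k": "v"}))
unpacker = Unpacker(buf, raw=False)

n = unpacker.read_array_header()               # header only; items stay in the stream
items = [unpacker.unpack() for _ in range(n)]  # construct path: objects are built
assert items == [1, 2, 3]

m = unpacker.read_map_header()                 # map header read the same way
unpacker.skip()                                # skip path: the key is parsed, not built
assert (m, unpacker.unpack()) == (1, "v")      # the value is still constructed
```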

pyproject.toml Normal file

@ -0,0 +1,45 @@
[build-system]
requires = ["setuptools >= 78.1.1"]
build-backend = "setuptools.build_meta"
[project]
name = "msgpack"
dynamic = ["version"]
license = "Apache-2.0"
authors = [{name="Inada Naoki", email="songofacandy@gmail.com"}]
description = "MessagePack serializer"
readme = "README.md"
keywords = ["msgpack", "messagepack", "serializer", "serialization", "binary"]
requires-python = ">=3.10"
classifiers = [
"Development Status :: 5 - Production/Stable",
"Operating System :: OS Independent",
"Topic :: File Formats",
"Intended Audience :: Developers",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy",
]
[project.urls]
Homepage = "https://msgpack.org/"
Documentation = "https://msgpack-python.readthedocs.io/"
Repository = "https://github.com/msgpack/msgpack-python/"
Tracker = "https://github.com/msgpack/msgpack-python/issues"
Changelog = "https://github.com/msgpack/msgpack-python/blob/main/ChangeLog.rst"
[tool.setuptools]
# Do not install C/C++/Cython source files
include-package-data = false
[tool.setuptools.dynamic]
version = {attr = "msgpack.__version__"}
[tool.ruff]
line-length = 100
target-version = "py310"
lint.select = [
"E", # pycodestyle
"F", # Pyflakes
"I", # isort
#"UP", pyupgrade
]
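Because `version` is declared dynamic and resolved from the `msgpack.__version__` attribute, the distribution metadata and the runtime attribute come from the same place. A minimal sketch of checking that (assumes a package built from this configuration is installed):

```python
# Hedged sketch: the dynamic version above is read from msgpack.__version__
# at build time, so installed metadata and the runtime attribute should agree.
from importlib.metadata import version

import msgpack

assert version("msgpack") == msgpack.__version__
```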

requirements.txt Normal file

@ -0,0 +1,3 @@
Cython==3.2.1
setuptools==78.1.1
build

setup.py Executable file → Normal file

@ -1,84 +1,32 @@
#!/usr/bin/env python
# coding: utf-8
version = (0, 1, 12)
import os
import sys
from glob import glob
from distutils.core import setup, Extension
from distutils.command.sdist import sdist
try:
from Cython.Distutils import build_ext
import Cython.Compiler.Main as cython_compiler
have_cython = True
except ImportError:
from distutils.command.build_ext import build_ext
have_cython = False
from setuptools import Extension, setup
# make msgpack/__version__.py
f = open('msgpack/__version__.py', 'w')
f.write("version = %r\n" % (version,))
f.close()
del f
PYPY = hasattr(sys, "pypy_version_info")
version_str = '.'.join(str(x) for x in version[:3])
if len(version) > 3 and version[3] != 'final':
version_str += version[3]
libraries = []
macros = []
ext_modules = []
# take care of extension modules.
if have_cython:
sources = ['msgpack/_msgpack.pyx']
if sys.platform == "win32":
libraries.append("ws2_32")
macros = [("__LITTLE_ENDIAN__", "1")]
class Sdist(sdist):
def __init__(self, *args, **kwargs):
for src in glob('msgpack/*.pyx'):
cython_compiler.compile(glob('msgpack/*.pyx'),
cython_compiler.default_options)
sdist.__init__(self, *args, **kwargs)
else:
sources = ['msgpack/_msgpack.c']
if not PYPY and not os.environ.get("MSGPACK_PUREPYTHON"):
ext_modules.append(
Extension(
"msgpack._cmsgpack",
sources=["msgpack/_cmsgpack.c"],
libraries=libraries,
include_dirs=["."],
define_macros=macros,
)
)
del libraries, macros
for f in sources:
if not os.path.exists(f):
raise ImportError("Building msgpack from VCS needs Cython. Install Cython or use sdist package.")
Sdist = sdist
libraries = ['ws2_32'] if sys.platform == 'win32' else []
msgpack_mod = Extension('msgpack._msgpack',
sources=sources,
libraries=libraries,
)
del sources, libraries
desc = 'MessagePack (de)serializer.'
long_desc = """MessagePack (de)serializer for Python.
What's MessagePack? (from http://msgpack.org/)
MessagePack is a binary-based efficient data interchange format that is
focused on high performance. It is like JSON, but very fast and small.
"""
setup(name='msgpack-python',
author='INADA Naoki',
author_email='songofacandy@gmail.com',
version=version_str,
cmdclass={'build_ext': build_ext, 'sdist': Sdist},
ext_modules=[msgpack_mod],
packages=['msgpack'],
description=desc,
long_description=long_desc,
url='http://msgpack.org/',
download_url='http://pypi.python.org/pypi/msgpack/',
classifiers=[
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 3',
'Development Status :: 4 - Beta',
'Intended Audience :: Developers',
'License :: OSI Approved :: Apache Software License',
]
)
setup(
ext_modules=ext_modules,
packages=["msgpack"],
)
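The rewritten setup.py builds the `msgpack._cmsgpack` extension only on CPython and only when `MSGPACK_PUREPYTHON` is unset; otherwise the pure-Python implementation is used. A minimal sketch for checking which one an installed copy loaded, using the same `__module__` test as the buffer test further below (the printed strings are illustrative):

```python
# Hedged sketch: did the _cmsgpack extension defined above get built and
# imported, or is the pure-Python implementation in use (PyPy, or
# MSGPACK_PUREPYTHON set at build time)?
import msgpack

if msgpack.Packer.__module__ == "msgpack._cmsgpack":
    print("Cython extension in use")
else:
    print("pure-Python implementation in use:", msgpack.Packer.__module__)
```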


@ -1,16 +1,49 @@
#!/usr/bin/env python
# coding: utf-8
from pytest import raises
from msgpack import Packer, packb, unpackb
from nose import main
from nose.tools import *
from msgpack import packb, unpackb
def test_unpack_buffer():
from array import array
buf = array('c')
buf.fromstring(packb(('foo', 'bar')))
obj = unpackb(buf)
assert_equal(('foo', 'bar'), obj)
if __name__ == '__main__':
main()
buf = array("b")
buf.frombytes(packb((b"foo", b"bar")))
obj = unpackb(buf, use_list=1)
assert [b"foo", b"bar"] == obj
def test_unpack_bytearray():
buf = bytearray(packb((b"foo", b"bar")))
obj = unpackb(buf, use_list=1)
assert [b"foo", b"bar"] == obj
expected_type = bytes
assert all(type(s) is expected_type for s in obj)
def test_unpack_memoryview():
buf = bytearray(packb((b"foo", b"bar")))
view = memoryview(buf)
obj = unpackb(view, use_list=1)
assert [b"foo", b"bar"] == obj
expected_type = bytes
assert all(type(s) is expected_type for s in obj)
def test_packer_getbuffer():
packer = Packer(autoreset=False)
packer.pack_array_header(2)
packer.pack(42)
packer.pack("hello")
buffer = packer.getbuffer()
assert isinstance(buffer, memoryview)
assert bytes(buffer) == b"\x92*\xa5hello"
if Packer.__module__ == "msgpack._cmsgpack": # only for Cython
# cython Packer supports buffer protocol directly
assert bytes(packer) == b"\x92*\xa5hello"
with raises(BufferError):
packer.pack(42)
buffer.release()
packer.pack(42)
assert bytes(packer) == b"\x92*\xa5hello*"


@ -1,105 +1,136 @@
#!/usr/bin/env python
# coding: utf-8
from nose import main
from nose.tools import *
from msgpack import packs, unpacks
from msgpack import packb, unpackb
def check(length, obj):
v = packs(obj)
assert_equal(len(v), length, "%r length should be %r but get %r" % (obj, length, len(v)))
assert_equal(unpacks(v), obj)
def check(length, obj, use_bin_type=True):
v = packb(obj, use_bin_type=use_bin_type)
assert len(v) == length, f"{obj!r} length should be {length!r} but get {len(v)!r}"
assert unpackb(v, use_list=0, raw=not use_bin_type) == obj
def test_1():
for o in [None, True, False, 0, 1, (1 << 6), (1 << 7) - 1, -1,
-((1<<5)-1), -(1<<5)]:
for o in [
None,
True,
False,
0,
1,
(1 << 6),
(1 << 7) - 1,
-1,
-((1 << 5) - 1),
-(1 << 5),
]:
check(1, o)
def test_2():
for o in [1 << 7, (1 << 8) - 1,
-((1<<5)+1), -(1<<7)
]:
for o in [1 << 7, (1 << 8) - 1, -((1 << 5) + 1), -(1 << 7)]:
check(2, o)
def test_3():
for o in [1 << 8, (1 << 16) - 1,
-((1<<7)+1), -(1<<15)]:
for o in [1 << 8, (1 << 16) - 1, -((1 << 7) + 1), -(1 << 15)]:
check(3, o)
def test_5():
for o in [1 << 16, (1 << 32) - 1,
-((1<<15)+1), -(1<<31)]:
for o in [1 << 16, (1 << 32) - 1, -((1 << 15) + 1), -(1 << 31)]:
check(5, o)
def test_9():
for o in [1 << 32, (1 << 64) - 1,
-((1<<31)+1), -(1<<63),
1.0, 0.1, -0.1, -1.0]:
for o in [
1 << 32,
(1 << 64) - 1,
-((1 << 31) + 1),
-(1 << 63),
1.0,
0.1,
-0.1,
-1.0,
]:
check(9, o)
def check_raw(overhead, num):
check(num + overhead, " " * num)
check(num + overhead, b" " * num, use_bin_type=False)
def test_fixraw():
check_raw(1, 0)
check_raw(1, (1<<5) - 1)
check_raw(1, (1 << 5) - 1)
def test_raw16():
check_raw(3, 1<<5)
check_raw(3, (1<<16) - 1)
check_raw(3, 1 << 5)
check_raw(3, (1 << 16) - 1)
def test_raw32():
check_raw(5, 1<<16)
check_raw(5, 1 << 16)
def check_array(overhead, num):
check(num + overhead, (None,) * num)
def test_fixarray():
check_array(1, 0)
check_array(1, (1 << 4) - 1)
def test_array16():
check_array(3, 1 << 4)
check_array(3, (1<<16)-1)
check_array(3, (1 << 16) - 1)
def test_array32():
check_array(5, (1<<16))
check_array(5, (1 << 16))
def match(obj, buf):
assert_equal(packs(obj), buf)
assert_equal(unpacks(buf), obj)
assert packb(obj) == buf
assert unpackb(buf, use_list=0, strict_map_key=False) == obj
def test_match():
cases = [
(None, '\xc0'),
(False, '\xc2'),
(True, '\xc3'),
(0, '\x00'),
(127, '\x7f'),
(128, '\xcc\x80'),
(256, '\xcd\x01\x00'),
(-1, '\xff'),
(-33, '\xd0\xdf'),
(-129, '\xd1\xff\x7f'),
({1:1}, '\x81\x01\x01'),
(1.0, "\xcb\x3f\xf0\x00\x00\x00\x00\x00\x00"),
((), '\x90'),
(tuple(range(15)),"\x9f\x00\x01\x02\x03\x04\x05\x06\x07\x08\x09\x0a\x0b\x0c\x0d\x0e"),
(tuple(range(16)),"\xdc\x00\x10\x00\x01\x02\x03\x04\x05\x06\x07\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"),
({}, '\x80'),
(dict([(x,x) for x in range(15)]), '\x8f\x00\x00\x01\x01\x02\x02\x03\x03\x04\x04\x05\x05\x06\x06\x07\x07\x08\x08\t\t\n\n\x0b\x0b\x0c\x0c\r\r\x0e\x0e'),
(dict([(x,x) for x in range(16)]), '\xde\x00\x10\x00\x00\x01\x01\x02\x02\x03\x03\x04\x04\x05\x05\x06\x06\x07\x07\x08\x08\t\t\n\n\x0b\x0b\x0c\x0c\r\r\x0e\x0e\x0f\x0f'),
]
(None, b"\xc0"),
(False, b"\xc2"),
(True, b"\xc3"),
(0, b"\x00"),
(127, b"\x7f"),
(128, b"\xcc\x80"),
(256, b"\xcd\x01\x00"),
(-1, b"\xff"),
(-33, b"\xd0\xdf"),
(-129, b"\xd1\xff\x7f"),
({1: 1}, b"\x81\x01\x01"),
(1.0, b"\xcb\x3f\xf0\x00\x00\x00\x00\x00\x00"),
((), b"\x90"),
(
tuple(range(15)),
b"\x9f\x00\x01\x02\x03\x04\x05\x06\x07\x08\x09\x0a\x0b\x0c\x0d\x0e",
),
(
tuple(range(16)),
b"\xdc\x00\x10\x00\x01\x02\x03\x04\x05\x06\x07\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f",
),
({}, b"\x80"),
(
{x: x for x in range(15)},
b"\x8f\x00\x00\x01\x01\x02\x02\x03\x03\x04\x04\x05\x05\x06\x06\x07\x07\x08\x08\t\t\n\n\x0b\x0b\x0c\x0c\r\r\x0e\x0e",
),
(
{x: x for x in range(16)},
b"\xde\x00\x10\x00\x00\x01\x01\x02\x02\x03\x03\x04\x04\x05\x05\x06\x06\x07\x07\x08\x08\t\t\n\n\x0b\x0b\x0c\x0c\r\r\x0e\x0e\x0f\x0f",
),
]
for v, p in cases:
match(v, p)
def test_unicode():
assert_equal('foobar', unpacks(packs(u'foobar')))
if __name__ == '__main__':
main()
def test_unicode():
assert unpackb(packb("foobar"), use_list=1) == "foobar"


@ -1,14 +1,63 @@
#!/usr/bin/env python
# coding: utf-8
from nose.tools import *
from msgpack import packs, unpacks
import datetime
def test_raise_on_find_unsupported_value():
assert_raises(TypeError, packs, datetime.datetime.now())
from pytest import raises
if __name__ == '__main__':
from nose import main
main()
from msgpack import FormatError, OutOfData, StackError, Unpacker, packb, unpackb
class DummyException(Exception):
pass
def test_raise_on_find_unsupported_value():
with raises(TypeError):
packb(datetime.datetime.now())
def test_raise_from_object_hook():
def hook(obj):
raise DummyException
raises(DummyException, unpackb, packb({}), object_hook=hook)
raises(DummyException, unpackb, packb({"fizz": "buzz"}), object_hook=hook)
raises(DummyException, unpackb, packb({"fizz": "buzz"}), object_pairs_hook=hook)
raises(DummyException, unpackb, packb({"fizz": {"buzz": "spam"}}), object_hook=hook)
raises(
DummyException,
unpackb,
packb({"fizz": {"buzz": "spam"}}),
object_pairs_hook=hook,
)
def test_invalidvalue():
incomplete = b"\xd9\x97#DL_" # raw8 - length=0x97
with raises(ValueError):
unpackb(incomplete)
with raises(OutOfData):
unpacker = Unpacker()
unpacker.feed(incomplete)
unpacker.unpack()
with raises(FormatError):
unpackb(b"\xc1") # (undefined tag)
with raises(FormatError):
unpackb(b"\x91\xc1") # fixarray(len=1) [ (undefined tag) ]
with raises(StackError):
unpackb(b"\x91" * 3000) # nested fixarray(len=1)
def test_strict_map_key():
valid = {"unicode": 1, b"bytes": 2}
packed = packb(valid, use_bin_type=True)
assert valid == unpackb(packed, raw=False, strict_map_key=True)
invalid = {42: 1}
packed = packb(invalid, use_bin_type=True)
with raises(ValueError):
unpackb(packed, raw=False, strict_map_key=True)

test/test_extension.py Normal file

@ -0,0 +1,78 @@
import array
import msgpack
from msgpack import ExtType
def test_pack_ext_type():
def p(s):
packer = msgpack.Packer()
packer.pack_ext_type(0x42, s)
return packer.bytes()
assert p(b"A") == b"\xd4\x42A" # fixext 1
assert p(b"AB") == b"\xd5\x42AB" # fixext 2
assert p(b"ABCD") == b"\xd6\x42ABCD" # fixext 4
assert p(b"ABCDEFGH") == b"\xd7\x42ABCDEFGH" # fixext 8
assert p(b"A" * 16) == b"\xd8\x42" + b"A" * 16 # fixext 16
assert p(b"ABC") == b"\xc7\x03\x42ABC" # ext 8
assert p(b"A" * 0x0123) == b"\xc8\x01\x23\x42" + b"A" * 0x0123 # ext 16
assert p(b"A" * 0x00012345) == b"\xc9\x00\x01\x23\x45\x42" + b"A" * 0x00012345 # ext 32
def test_unpack_ext_type():
def check(b, expected):
assert msgpack.unpackb(b) == expected
check(b"\xd4\x42A", ExtType(0x42, b"A")) # fixext 1
check(b"\xd5\x42AB", ExtType(0x42, b"AB")) # fixext 2
check(b"\xd6\x42ABCD", ExtType(0x42, b"ABCD")) # fixext 4
check(b"\xd7\x42ABCDEFGH", ExtType(0x42, b"ABCDEFGH")) # fixext 8
check(b"\xd8\x42" + b"A" * 16, ExtType(0x42, b"A" * 16)) # fixext 16
check(b"\xc7\x03\x42ABC", ExtType(0x42, b"ABC")) # ext 8
check(b"\xc8\x01\x23\x42" + b"A" * 0x0123, ExtType(0x42, b"A" * 0x0123)) # ext 16
check(
b"\xc9\x00\x01\x23\x45\x42" + b"A" * 0x00012345,
ExtType(0x42, b"A" * 0x00012345),
) # ext 32
def test_extension_type():
def default(obj):
print("default called", obj)
if isinstance(obj, array.array):
typecode = 123 # application specific typecode
try:
data = obj.tobytes()
except AttributeError:
data = obj.tostring()
return ExtType(typecode, data)
raise TypeError(f"Unknown type object {obj!r}")
def ext_hook(code, data):
print("ext_hook called", code, data)
assert code == 123
obj = array.array("d")
obj.frombytes(data)
return obj
obj = [42, b"hello", array.array("d", [1.1, 2.2, 3.3])]
s = msgpack.packb(obj, default=default)
obj2 = msgpack.unpackb(s, ext_hook=ext_hook)
assert obj == obj2
def test_overriding_hooks():
def default(obj):
if isinstance(obj, int):
return {"__type__": "long", "__data__": str(obj)}
else:
return obj
obj = {"testval": 1823746192837461928374619}
refobj = {"testval": default(obj["testval"])}
refout = msgpack.packb(refobj)
assert isinstance(refout, (str, bytes))
testout = msgpack.packb(obj, default=default)
assert refout == testout


@ -1,75 +1,88 @@
#!/usr/bin/env python
# coding: utf-8
from nose import main
from nose.tools import *
from msgpack import unpacks
from msgpack import unpackb
def check(src, should, use_list=0, raw=True):
assert unpackb(src, use_list=use_list, raw=raw, strict_map_key=False) == should
def check(src, should):
assert_equal(unpacks(src), should)
def testSimpleValue():
check("\x93\xc0\xc2\xc3",
(None, False, True,))
check(b"\x93\xc0\xc2\xc3", (None, False, True))
def testFixnum():
check("\x92\x93\x00\x40\x7f\x93\xe0\xf0\xff",
((0,64,127,), (-32,-16,-1,),)
)
check(b"\x92\x93\x00\x40\x7f\x93\xe0\xf0\xff", ((0, 64, 127), (-32, -16, -1)))
def testFixArray():
check("\x92\x90\x91\x91\xc0",
((),((None,),),),
)
check(b"\x92\x90\x91\x91\xc0", ((), ((None,),)))
def testFixRaw():
check("\x94\xa0\xa1a\xa2bc\xa3def",
("", "a", "bc", "def",),
)
check(b"\x94\xa0\xa1a\xa2bc\xa3def", (b"", b"a", b"bc", b"def"))
def testFixMap():
check(
"\x82\xc2\x81\xc0\xc0\xc3\x81\xc0\x80",
{False: {None: None}, True:{None:{}}},
)
check(b"\x82\xc2\x81\xc0\xc0\xc3\x81\xc0\x80", {False: {None: None}, True: {None: {}}})
def testUnsignedInt():
check(
"\x99\xcc\x00\xcc\x80\xcc\xff\xcd\x00\x00\xcd\x80\x00"
"\xcd\xff\xff\xce\x00\x00\x00\x00\xce\x80\x00\x00\x00"
"\xce\xff\xff\xff\xff",
(0, 128, 255, 0, 32768, 65535, 0, 2147483648, 4294967295,),
)
b"\x99\xcc\x00\xcc\x80\xcc\xff\xcd\x00\x00\xcd\x80\x00"
b"\xcd\xff\xff\xce\x00\x00\x00\x00\xce\x80\x00\x00\x00"
b"\xce\xff\xff\xff\xff",
(0, 128, 255, 0, 32768, 65535, 0, 2147483648, 4294967295),
)
def testSignedInt():
check("\x99\xd0\x00\xd0\x80\xd0\xff\xd1\x00\x00\xd1\x80\x00"
"\xd1\xff\xff\xd2\x00\x00\x00\x00\xd2\x80\x00\x00\x00"
"\xd2\xff\xff\xff\xff",
(0, -128, -1, 0, -32768, -1, 0, -2147483648, -1,))
check(
b"\x99\xd0\x00\xd0\x80\xd0\xff\xd1\x00\x00\xd1\x80\x00"
b"\xd1\xff\xff\xd2\x00\x00\x00\x00\xd2\x80\x00\x00\x00"
b"\xd2\xff\xff\xff\xff",
(0, -128, -1, 0, -32768, -1, 0, -2147483648, -1),
)
def testRaw():
check("\x96\xda\x00\x00\xda\x00\x01a\xda\x00\x02ab\xdb\x00\x00"
"\x00\x00\xdb\x00\x00\x00\x01a\xdb\x00\x00\x00\x02ab",
("", "a", "ab", "", "a", "ab"))
check(
b"\x96\xda\x00\x00\xda\x00\x01a\xda\x00\x02ab\xdb\x00\x00"
b"\x00\x00\xdb\x00\x00\x00\x01a\xdb\x00\x00\x00\x02ab",
(b"", b"a", b"ab", b"", b"a", b"ab"),
)
check(
b"\x96\xda\x00\x00\xda\x00\x01a\xda\x00\x02ab\xdb\x00\x00"
b"\x00\x00\xdb\x00\x00\x00\x01a\xdb\x00\x00\x00\x02ab",
("", "a", "ab", "", "a", "ab"),
raw=False,
)
def testArray():
check("\x96\xdc\x00\x00\xdc\x00\x01\xc0\xdc\x00\x02\xc2\xc3\xdd\x00"
"\x00\x00\x00\xdd\x00\x00\x00\x01\xc0\xdd\x00\x00\x00\x02"
"\xc2\xc3",
((), (None,), (False,True), (), (None,), (False,True))
)
check(
b"\x96\xdc\x00\x00\xdc\x00\x01\xc0\xdc\x00\x02\xc2\xc3\xdd\x00"
b"\x00\x00\x00\xdd\x00\x00\x00\x01\xc0\xdd\x00\x00\x00\x02"
b"\xc2\xc3",
((), (None,), (False, True), (), (None,), (False, True)),
)
def testMap():
check(
"\x96"
"\xde\x00\x00"
"\xde\x00\x01\xc0\xc2"
"\xde\x00\x02\xc0\xc2\xc3\xc2"
"\xdf\x00\x00\x00\x00"
"\xdf\x00\x00\x00\x01\xc0\xc2"
"\xdf\x00\x00\x00\x02\xc0\xc2\xc3\xc2",
({}, {None: False}, {True: False, None: False}, {},
{None: False}, {True: False, None: False}))
if __name__ == '__main__':
main()
b"\x96"
b"\xde\x00\x00"
b"\xde\x00\x01\xc0\xc2"
b"\xde\x00\x02\xc0\xc2\xc3\xc2"
b"\xdf\x00\x00\x00\x00"
b"\xdf\x00\x00\x00\x01\xc0\xc2"
b"\xdf\x00\x00\x00\x02\xc0\xc2\xc3\xc2",
(
{},
{None: False},
{True: False, None: False},
{},
{None: False},
{True: False, None: False},
),
)

test/test_limits.py Normal file

@ -0,0 +1,165 @@
#!/usr/bin/env python
import pytest
from msgpack import (
ExtType,
Packer,
PackOverflowError,
PackValueError,
Unpacker,
UnpackValueError,
packb,
unpackb,
)
def test_integer():
x = -(2**63)
assert unpackb(packb(x)) == x
with pytest.raises(PackOverflowError):
packb(x - 1)
x = 2**64 - 1
assert unpackb(packb(x)) == x
with pytest.raises(PackOverflowError):
packb(x + 1)
def test_array_header():
packer = Packer()
packer.pack_array_header(2**32 - 1)
with pytest.raises(PackValueError):
packer.pack_array_header(2**32)
def test_map_header():
packer = Packer()
packer.pack_map_header(2**32 - 1)
with pytest.raises(PackValueError):
packer.pack_array_header(2**32)
def test_max_str_len():
d = "x" * 3
packed = packb(d)
unpacker = Unpacker(max_str_len=3, raw=False)
unpacker.feed(packed)
assert unpacker.unpack() == d
unpacker = Unpacker(max_str_len=2, raw=False)
with pytest.raises(UnpackValueError):
unpacker.feed(packed)
unpacker.unpack()
def test_max_bin_len():
d = b"x" * 3
packed = packb(d, use_bin_type=True)
unpacker = Unpacker(max_bin_len=3)
unpacker.feed(packed)
assert unpacker.unpack() == d
unpacker = Unpacker(max_bin_len=2)
with pytest.raises(UnpackValueError):
unpacker.feed(packed)
unpacker.unpack()
def test_max_array_len():
d = [1, 2, 3]
packed = packb(d)
unpacker = Unpacker(max_array_len=3)
unpacker.feed(packed)
assert unpacker.unpack() == d
unpacker = Unpacker(max_array_len=2)
with pytest.raises(UnpackValueError):
unpacker.feed(packed)
unpacker.unpack()
def test_max_map_len():
d = {1: 2, 3: 4, 5: 6}
packed = packb(d)
unpacker = Unpacker(max_map_len=3, strict_map_key=False)
unpacker.feed(packed)
assert unpacker.unpack() == d
unpacker = Unpacker(max_map_len=2, strict_map_key=False)
with pytest.raises(UnpackValueError):
unpacker.feed(packed)
unpacker.unpack()
def test_max_ext_len():
d = ExtType(42, b"abc")
packed = packb(d)
unpacker = Unpacker(max_ext_len=3)
unpacker.feed(packed)
assert unpacker.unpack() == d
unpacker = Unpacker(max_ext_len=2)
with pytest.raises(UnpackValueError):
unpacker.feed(packed)
unpacker.unpack()
# PyPy fails following tests because of constant folding?
# https://bugs.pypy.org/issue1721
# @pytest.mark.skipif(True, reason="Requires very large memory.")
# def test_binary():
# x = b'x' * (2**32 - 1)
# assert unpackb(packb(x)) == x
# del x
# x = b'x' * (2**32)
# with pytest.raises(ValueError):
# packb(x)
#
#
# @pytest.mark.skipif(True, reason="Requires very large memory.")
# def test_string():
# x = 'x' * (2**32 - 1)
# assert unpackb(packb(x)) == x
# x += 'y'
# with pytest.raises(ValueError):
# packb(x)
#
#
# @pytest.mark.skipif(True, reason="Requires very large memory.")
# def test_array():
# x = [0] * (2**32 - 1)
# assert unpackb(packb(x)) == x
# x.append(0)
# with pytest.raises(ValueError):
# packb(x)
# auto max len
def test_auto_max_array_len():
packed = b"\xde\x00\x06zz"
with pytest.raises(UnpackValueError):
unpackb(packed, raw=False)
unpacker = Unpacker(max_buffer_size=5, raw=False)
unpacker.feed(packed)
with pytest.raises(UnpackValueError):
unpacker.unpack()
def test_auto_max_map_len():
# len(packed) == 6 -> max_map_len == 3
packed = b"\xde\x00\x04zzz"
with pytest.raises(UnpackValueError):
unpackb(packed, raw=False)
unpacker = Unpacker(max_buffer_size=6, raw=False)
unpacker.feed(packed)
with pytest.raises(UnpackValueError):
unpacker.unpack()

test/test_memoryview.py Normal file

@ -0,0 +1,99 @@
#!/usr/bin/env python
from array import array
from msgpack import packb, unpackb
def make_array(f, data):
a = array(f)
a.frombytes(data)
return a
def _runtest(format, nbytes, expected_header, expected_prefix, use_bin_type):
# create a new array
original_array = array(format)
original_array.fromlist([255] * (nbytes // original_array.itemsize))
original_data = original_array.tobytes()
view = memoryview(original_array)
# pack, unpack, and reconstruct array
packed = packb(view, use_bin_type=use_bin_type)
unpacked = unpackb(packed, raw=(not use_bin_type))
reconstructed_array = make_array(format, unpacked)
# check that we got the right amount of data
assert len(original_data) == nbytes
# check packed header
assert packed[:1] == expected_header
# check packed length prefix, if any
assert packed[1 : 1 + len(expected_prefix)] == expected_prefix
# check packed data
assert packed[1 + len(expected_prefix) :] == original_data
# check array unpacked correctly
assert original_array == reconstructed_array
def test_fixstr_from_byte():
_runtest("B", 1, b"\xa1", b"", False)
_runtest("B", 31, b"\xbf", b"", False)
def test_fixstr_from_float():
_runtest("f", 4, b"\xa4", b"", False)
_runtest("f", 28, b"\xbc", b"", False)
def test_str16_from_byte():
_runtest("B", 2**8, b"\xda", b"\x01\x00", False)
_runtest("B", 2**16 - 1, b"\xda", b"\xff\xff", False)
def test_str16_from_float():
_runtest("f", 2**8, b"\xda", b"\x01\x00", False)
_runtest("f", 2**16 - 4, b"\xda", b"\xff\xfc", False)
def test_str32_from_byte():
_runtest("B", 2**16, b"\xdb", b"\x00\x01\x00\x00", False)
def test_str32_from_float():
_runtest("f", 2**16, b"\xdb", b"\x00\x01\x00\x00", False)
def test_bin8_from_byte():
_runtest("B", 1, b"\xc4", b"\x01", True)
_runtest("B", 2**8 - 1, b"\xc4", b"\xff", True)
def test_bin8_from_float():
_runtest("f", 4, b"\xc4", b"\x04", True)
_runtest("f", 2**8 - 4, b"\xc4", b"\xfc", True)
def test_bin16_from_byte():
_runtest("B", 2**8, b"\xc5", b"\x01\x00", True)
_runtest("B", 2**16 - 1, b"\xc5", b"\xff\xff", True)
def test_bin16_from_float():
_runtest("f", 2**8, b"\xc5", b"\x01\x00", True)
_runtest("f", 2**16 - 4, b"\xc5", b"\xff\xfc", True)
def test_bin32_from_byte():
_runtest("B", 2**16, b"\xc6", b"\x00\x01\x00\x00", True)
def test_bin32_from_float():
_runtest("f", 2**16, b"\xc6", b"\x00\x01\x00\x00", True)
def test_multidim_memoryview():
# See https://github.com/msgpack/msgpack-python/issues/526
view = memoryview(b"\00" * 6)
data = view.cast(view.format, (3, 2))
packed = packb(data)
assert packed == b"\xc4\x06\x00\x00\x00\x00\x00\x00"

test/test_newspec.py Normal file

@ -0,0 +1,90 @@
from msgpack import ExtType, packb, unpackb
def test_str8():
header = b"\xd9"
data = b"x" * 32
b = packb(data.decode(), use_bin_type=True)
assert len(b) == len(data) + 2
assert b[0:2] == header + b"\x20"
assert b[2:] == data
assert unpackb(b, raw=True) == data
assert unpackb(b, raw=False) == data.decode()
data = b"x" * 255
b = packb(data.decode(), use_bin_type=True)
assert len(b) == len(data) + 2
assert b[0:2] == header + b"\xff"
assert b[2:] == data
assert unpackb(b, raw=True) == data
assert unpackb(b, raw=False) == data.decode()
def test_bin8():
header = b"\xc4"
data = b""
b = packb(data, use_bin_type=True)
assert len(b) == len(data) + 2
assert b[0:2] == header + b"\x00"
assert b[2:] == data
assert unpackb(b) == data
data = b"x" * 255
b = packb(data, use_bin_type=True)
assert len(b) == len(data) + 2
assert b[0:2] == header + b"\xff"
assert b[2:] == data
assert unpackb(b) == data
def test_bin16():
header = b"\xc5"
data = b"x" * 256
b = packb(data, use_bin_type=True)
assert len(b) == len(data) + 3
assert b[0:1] == header
assert b[1:3] == b"\x01\x00"
assert b[3:] == data
assert unpackb(b) == data
data = b"x" * 65535
b = packb(data, use_bin_type=True)
assert len(b) == len(data) + 3
assert b[0:1] == header
assert b[1:3] == b"\xff\xff"
assert b[3:] == data
assert unpackb(b) == data
def test_bin32():
header = b"\xc6"
data = b"x" * 65536
b = packb(data, use_bin_type=True)
assert len(b) == len(data) + 5
assert b[0:1] == header
assert b[1:5] == b"\x00\x01\x00\x00"
assert b[5:] == data
assert unpackb(b) == data
def test_ext():
def check(ext, packed):
assert packb(ext) == packed
assert unpackb(packed) == ext
check(ExtType(0x42, b"Z"), b"\xd4\x42Z") # fixext 1
check(ExtType(0x42, b"ZZ"), b"\xd5\x42ZZ") # fixext 2
check(ExtType(0x42, b"Z" * 4), b"\xd6\x42" + b"Z" * 4) # fixext 4
check(ExtType(0x42, b"Z" * 8), b"\xd7\x42" + b"Z" * 8) # fixext 8
check(ExtType(0x42, b"Z" * 16), b"\xd8\x42" + b"Z" * 16) # fixext 16
# ext 8
check(ExtType(0x42, b""), b"\xc7\x00\x42")
check(ExtType(0x42, b"Z" * 255), b"\xc7\xff\x42" + b"Z" * 255)
# ext 16
check(ExtType(0x42, b"Z" * 256), b"\xc8\x01\x00\x42" + b"Z" * 256)
check(ExtType(0x42, b"Z" * 0xFFFF), b"\xc8\xff\xff\x42" + b"Z" * 0xFFFF)
# ext 32
check(ExtType(0x42, b"Z" * 0x10000), b"\xc9\x00\x01\x00\x00\x42" + b"Z" * 0x10000)
# needs large memory
# check(ExtType(0x42, b'Z'*0xffffffff),
# b'\xc9\xff\xff\xff\xff\x42' + b'Z'*0xffffffff)


@ -1,46 +1,82 @@
#!/usr/bin/env python
# coding: utf-8
from nose import main
from nose.tools import *
from pytest import raises
from msgpack import packb, unpackb
from msgpack import packs, unpacks
def _decode_complex(obj):
if '__complex__' in obj:
return complex(obj['real'], obj['imag'])
if b"__complex__" in obj:
return complex(obj[b"real"], obj[b"imag"])
return obj
def _encode_complex(obj):
if isinstance(obj, complex):
return {'__complex__': True, 'real': 1, 'imag': 2}
return {b"__complex__": True, b"real": 1, b"imag": 2}
return obj
def test_encode_hook():
packed = packs([3, 1+2j], default=_encode_complex)
unpacked = unpacks(packed)
eq_(unpacked[1], {'__complex__': True, 'real': 1, 'imag': 2})
packed = packb([3, 1 + 2j], default=_encode_complex)
unpacked = unpackb(packed, use_list=1)
assert unpacked[1] == {b"__complex__": True, b"real": 1, b"imag": 2}
def test_decode_hook():
packed = packs([3, {'__complex__': True, 'real': 1, 'imag': 2}])
unpacked = unpacks(packed, object_hook=_decode_complex)
eq_(unpacked[1], 1+2j)
packed = packb([3, {b"__complex__": True, b"real": 1, b"imag": 2}])
unpacked = unpackb(packed, object_hook=_decode_complex, use_list=1)
assert unpacked[1] == 1 + 2j
def test_decode_pairs_hook():
packed = packb([3, {1: 2, 3: 4}])
prod_sum = 1 * 2 + 3 * 4
unpacked = unpackb(
packed,
object_pairs_hook=lambda lst: sum(k * v for k, v in lst),
use_list=1,
strict_map_key=False,
)
assert unpacked[1] == prod_sum
def test_only_one_obj_hook():
with raises(TypeError):
unpackb(b"", object_hook=lambda x: x, object_pairs_hook=lambda x: x)
@raises(ValueError)
def test_bad_hook():
packed = packs([3, 1+2j], default=lambda o: o)
unpacked = unpacks(packed)
with raises(TypeError):
packed = packb([3, 1 + 2j], default=lambda o: o)
unpackb(packed, use_list=1)
def _arr_to_str(arr):
return ''.join(str(c) for c in arr)
return "".join(str(c) for c in arr)
def test_array_hook():
packed = packs([1,2,3])
unpacked = unpacks(packed, list_hook=_arr_to_str)
eq_(unpacked, '123')
packed = packb([1, 2, 3])
unpacked = unpackb(packed, list_hook=_arr_to_str, use_list=1)
assert unpacked == "123"
if __name__ == '__main__':
test_decode_hook()
test_encode_hook()
test_bad_hook()
test_array_hook()
class DecodeError(Exception):
pass
def bad_complex_decoder(o):
raise DecodeError("Ooops!")
def test_an_exception_in_objecthook1():
with raises(DecodeError):
packed = packb({1: {"__complex__": True, "real": 1, "imag": 2}})
unpackb(packed, object_hook=bad_complex_decoder, strict_map_key=False)
def test_an_exception_in_objecthook2():
with raises(DecodeError):
packed = packb({1: [{"__complex__": True, "real": 1, "imag": 2}]})
unpackb(packed, list_hook=bad_complex_decoder, use_list=1, strict_map_key=False)


@ -1,88 +1,181 @@
#!/usr/bin/env python
# coding: utf-8
from nose import main
from nose.tools import *
from nose.plugins.skip import SkipTest
import struct
from collections import OrderedDict
from io import BytesIO
from msgpack import packs, unpacks, Packer, Unpacker
import pytest
from StringIO import StringIO
from msgpack import Packer, Unpacker, packb, unpackb
def check(data, use_list=False):
re = unpackb(packb(data), use_list=use_list, strict_map_key=False)
assert re == data
def check(data):
re = unpacks(packs(data))
assert_equal(re, data)
def testPack():
test_data = [
0, 1, 127, 128, 255, 256, 65535, 65536,
-1, -32, -33, -128, -129, -32768, -32769,
1.0,
"", "a", "a"*31, "a"*32,
None, True, False,
(), ((),), ((), None,),
0,
1,
127,
128,
255,
256,
65535,
65536,
4294967295,
4294967296,
-1,
-32,
-33,
-128,
-129,
-32768,
-32769,
-4294967296,
-4294967297,
1.0,
b"",
b"a",
b"a" * 31,
b"a" * 32,
None,
True,
False,
(),
((),),
((), None),
{None: 0},
(1<<23),
]
(1 << 23),
]
for td in test_data:
check(td)
def testPackUnicode():
test_data = [
u"", u"abcd", (u"defgh",), u"Русский текст",
]
test_data = ["", "abcd", ["defgh"], "Русский текст"]
for td in test_data:
re = unpacks(packs(td, encoding='utf-8'), encoding='utf-8')
assert_equal(re, td)
packer = Packer(encoding='utf-8')
re = unpackb(packb(td), use_list=1, raw=False)
assert re == td
packer = Packer()
data = packer.pack(td)
re = Unpacker(StringIO(data), encoding='utf-8').unpack()
assert_equal(re, td)
re = Unpacker(BytesIO(data), raw=False, use_list=1).unpack()
assert re == td
def testPackUTF32():
try:
test_data = [
u"", u"abcd", (u"defgh",), u"Русский текст",
]
for td in test_data:
re = unpacks(packs(td, encoding='utf-32'), encoding='utf-32')
assert_equal(re, td)
except LookupError:
raise SkipTest
def testPackBytes():
test_data = [
"", "abcd", ("defgh",),
]
test_data = [b"", b"abcd", (b"defgh",)]
for td in test_data:
check(td)
def testPackByteArrays():
test_data = [bytearray(b""), bytearray(b"abcd"), (bytearray(b"defgh"),)]
for td in test_data:
check(td)
def testIgnoreUnicodeErrors():
re = unpacks(packs('abc\xeddef'),
encoding='ascii', unicode_errors='ignore')
assert_equal(re, "abcdef")
re = unpackb(packb(b"abc\xeddef", use_bin_type=False), raw=False, unicode_errors="ignore")
assert re == "abcdef"
@raises(UnicodeDecodeError)
def testStrictUnicodeUnpack():
unpacks(packs('abc\xeddef'), encoding='utf-8')
packed = packb(b"abc\xeddef", use_bin_type=False)
with pytest.raises(UnicodeDecodeError):
unpackb(packed, raw=False, use_list=1)
@raises(UnicodeEncodeError)
def testStrictUnicodePack():
packs(u"abc\xeddef", encoding='ascii', unicode_errors='strict')
def testIgnoreErrorsPack():
re = unpacks(
packs(u"abcФФФdef", encoding='ascii', unicode_errors='ignore'),
encoding='utf-8')
assert_equal(re, u"abcdef")
re = unpackb(
packb("abc\udc80\udcffdef", use_bin_type=True, unicode_errors="ignore"),
raw=False,
use_list=1,
)
assert re == "abcdef"
@raises(TypeError)
def testNoEncoding():
packs(u"abc", encoding=None)
def testDecodeBinary():
re = unpacks(packs(u"abc"), encoding=None)
assert_equal(re, "abc")
re = unpackb(packb(b"abc"), use_list=1)
assert re == b"abc"
if __name__ == '__main__':
main()
def testPackFloat():
assert packb(1.0, use_single_float=True) == b"\xca" + struct.pack(">f", 1.0)
assert packb(1.0, use_single_float=False) == b"\xcb" + struct.pack(">d", 1.0)
def testArraySize(sizes=[0, 5, 50, 1000]):
bio = BytesIO()
packer = Packer()
for size in sizes:
bio.write(packer.pack_array_header(size))
for i in range(size):
bio.write(packer.pack(i))
bio.seek(0)
unpacker = Unpacker(bio, use_list=1)
for size in sizes:
assert unpacker.unpack() == list(range(size))
def test_manualreset(sizes=[0, 5, 50, 1000]):
packer = Packer(autoreset=False)
for size in sizes:
packer.pack_array_header(size)
for i in range(size):
packer.pack(i)
bio = BytesIO(packer.bytes())
unpacker = Unpacker(bio, use_list=1)
for size in sizes:
assert unpacker.unpack() == list(range(size))
packer.reset()
assert packer.bytes() == b""
def testMapSize(sizes=[0, 5, 50, 1000]):
bio = BytesIO()
packer = Packer()
for size in sizes:
bio.write(packer.pack_map_header(size))
for i in range(size):
bio.write(packer.pack(i)) # key
bio.write(packer.pack(i * 2)) # value
bio.seek(0)
unpacker = Unpacker(bio, strict_map_key=False)
for size in sizes:
assert unpacker.unpack() == {i: i * 2 for i in range(size)}
def test_odict():
seq = [(b"one", 1), (b"two", 2), (b"three", 3), (b"four", 4)]
od = OrderedDict(seq)
assert unpackb(packb(od), use_list=1) == dict(seq)
def pair_hook(seq):
return list(seq)
assert unpackb(packb(od), object_pairs_hook=pair_hook, use_list=1) == seq
def test_pairlist():
pairlist = [(b"a", 1), (2, b"b"), (b"foo", b"bar")]
packer = Packer()
packed = packer.pack_map_pairs(pairlist)
unpacked = unpackb(packed, object_pairs_hook=list, strict_map_key=False)
assert pairlist == unpacked
def test_get_buffer():
packer = Packer(autoreset=0, use_bin_type=True)
packer.pack([1, 2])
strm = BytesIO()
strm.write(packer.getbuffer())
written = strm.getvalue()
expected = packb([1, 2], use_bin_type=True)
assert written == expected

test/test_read_size.py Normal file

@ -0,0 +1,72 @@
"""Test Unpacker's read_array_header and read_map_header methods"""
from msgpack import OutOfData, Unpacker, packb
UnexpectedTypeException = ValueError
def test_read_array_header():
unpacker = Unpacker()
unpacker.feed(packb(["a", "b", "c"]))
assert unpacker.read_array_header() == 3
assert unpacker.unpack() == "a"
assert unpacker.unpack() == "b"
assert unpacker.unpack() == "c"
try:
unpacker.unpack()
assert 0, "should raise exception"
except OutOfData:
assert 1, "okay"
def test_read_map_header():
unpacker = Unpacker()
unpacker.feed(packb({"a": "A"}))
assert unpacker.read_map_header() == 1
assert unpacker.unpack() == "a"
assert unpacker.unpack() == "A"
try:
unpacker.unpack()
assert 0, "should raise exception"
except OutOfData:
assert 1, "okay"
def test_incorrect_type_array():
unpacker = Unpacker()
unpacker.feed(packb(1))
try:
unpacker.read_array_header()
assert 0, "should raise exception"
except UnexpectedTypeException:
assert 1, "okay"
def test_incorrect_type_map():
unpacker = Unpacker()
unpacker.feed(packb(1))
try:
unpacker.read_map_header()
assert 0, "should raise exception"
except UnexpectedTypeException:
assert 1, "okay"
def test_correct_type_nested_array():
unpacker = Unpacker()
unpacker.feed(packb({"a": ["b", "c", "d"]}))
try:
unpacker.read_array_header()
assert 0, "should raise exception"
except UnexpectedTypeException:
assert 1, "okay"
def test_incorrect_type_nested_map():
unpacker = Unpacker()
unpacker.feed(packb([{"a": "b"}]))
try:
unpacker.read_map_header()
assert 0, "should raise exception"
except UnexpectedTypeException:
assert 1, "okay"


@ -1,49 +1,41 @@
#!/usr/bin/env python
# coding: utf-8
# ruff: noqa: E501
# ignore line length limit for long comments
import io
from nose import main
from nose.tools import *
import StringIO
import msgpack
binarydata = [chr(i) for i in xrange(256)]
binarydata = "".join(binarydata)
binarydata = bytes(bytearray(range(256)))
def gen_binary_data(idx):
data = binarydata[:idx % 300]
return data
return binarydata[: idx % 300]
def test_exceeding_unpacker_read_size():
dumpf = StringIO.StringIO()
dumpf = io.BytesIO()
packer = msgpack.Packer()
NUMBER_OF_STRINGS = 6
read_size = 16
# 5 ok for read_size=16, while 6 glibc detected *** python: double free or corruption (fasttop):
# 20 ok for read_size=256, while 25 segfaults / glibc detected *** python: double free or corruption (!prev)
# 40 ok for read_size=1024, while 50 introduces errors
# 7000 ok for read_size=1024*1024, while 8000 leads to glibc detected *** python: double free or corruption (!prev):
# 5 ok for read_size=16, while 6 glibc detected *** python: double free or corruption (fasttop):
# 20 ok for read_size=256, while 25 segfaults / glibc detected *** python: double free or corruption (!prev)
# 40 ok for read_size=1024, while 50 introduces errors
# 7000 ok for read_size=1024*1024, while 8000 leads to glibc detected *** python: double free or corruption (!prev):
for idx in xrange(NUMBER_OF_STRINGS):
for idx in range(NUMBER_OF_STRINGS):
data = gen_binary_data(idx)
dumpf.write(packer.pack(data))
f = StringIO.StringIO(dumpf.getvalue())
f = io.BytesIO(dumpf.getvalue())
dumpf.close()
unpacker = msgpack.Unpacker(f, read_size=read_size)
unpacker = msgpack.Unpacker(f, read_size=read_size, use_list=1)
read_count = 0
for idx, o in enumerate(unpacker):
assert_equal(type(o), str)
assert_equal(o, gen_binary_data(idx))
assert isinstance(o, bytes)
assert o == gen_binary_data(idx)
read_count += 1
assert_equal(read_count, NUMBER_OF_STRINGS)
if __name__ == '__main__':
# main()
test_exceeding_unpacker_read_size()
assert read_count == NUMBER_OF_STRINGS


@ -1,34 +1,148 @@
#!/usr/bin/env python
# coding: utf-8
import io
from pytest import raises
from msgpack import BufferFull, Unpacker, pack, packb
from msgpack.exceptions import OutOfData
def test_partialdata():
unpacker = Unpacker()
unpacker.feed(b"\xa5")
with raises(StopIteration):
next(iter(unpacker))
unpacker.feed(b"h")
with raises(StopIteration):
next(iter(unpacker))
unpacker.feed(b"a")
with raises(StopIteration):
next(iter(unpacker))
unpacker.feed(b"l")
with raises(StopIteration):
next(iter(unpacker))
unpacker.feed(b"l")
with raises(StopIteration):
next(iter(unpacker))
unpacker.feed(b"o")
assert next(iter(unpacker)) == "hallo"
from msgpack import Unpacker
def test_foobar():
unpacker = Unpacker(read_size=3)
unpacker.feed('foobar')
assert unpacker.unpack() == ord('f')
assert unpacker.unpack() == ord('o')
assert unpacker.unpack() == ord('o')
assert unpacker.unpack() == ord('b')
assert unpacker.unpack() == ord('a')
assert unpacker.unpack() == ord('r')
try:
o = unpacker.unpack()
print "Oops!", o
assert 0
except StopIteration:
assert 1
else:
assert 0
unpacker.feed('foo')
unpacker.feed('bar')
unpacker = Unpacker(read_size=3, use_list=1)
unpacker.feed(b"foobar")
assert unpacker.unpack() == ord(b"f")
assert unpacker.unpack() == ord(b"o")
assert unpacker.unpack() == ord(b"o")
assert unpacker.unpack() == ord(b"b")
assert unpacker.unpack() == ord(b"a")
assert unpacker.unpack() == ord(b"r")
with raises(OutOfData):
unpacker.unpack()
unpacker.feed(b"foo")
unpacker.feed(b"bar")
k = 0
for o, e in zip(unpacker, 'foobarbaz'):
for o, e in zip(unpacker, "foobarbaz"):
assert o == ord(e)
k += 1
assert k == len('foobar')
assert k == len(b"foobar")
if __name__ == '__main__':
test_foobar()
def test_foobar_skip():
unpacker = Unpacker(read_size=3, use_list=1)
unpacker.feed(b"foobar")
assert unpacker.unpack() == ord(b"f")
unpacker.skip()
assert unpacker.unpack() == ord(b"o")
unpacker.skip()
assert unpacker.unpack() == ord(b"a")
unpacker.skip()
with raises(OutOfData):
unpacker.unpack()
def test_maxbuffersize():
with raises(ValueError):
Unpacker(read_size=5, max_buffer_size=3)
unpacker = Unpacker(read_size=3, max_buffer_size=3, use_list=1)
unpacker.feed(b"fo")
with raises(BufferFull):
unpacker.feed(b"ob")
unpacker.feed(b"o")
assert ord("f") == next(unpacker)
unpacker.feed(b"b")
assert ord("o") == next(unpacker)
assert ord("o") == next(unpacker)
assert ord("b") == next(unpacker)
def test_maxbuffersize_file():
buff = io.BytesIO(packb(b"a" * 10) + packb([b"a" * 20] * 2))
unpacker = Unpacker(buff, read_size=1, max_buffer_size=19, max_bin_len=20)
assert unpacker.unpack() == b"a" * 10
# assert unpacker.unpack() == [b"a" * 20]*2
with raises(BufferFull):
print(unpacker.unpack())
def test_readbytes():
unpacker = Unpacker(read_size=3)
unpacker.feed(b"foobar")
assert unpacker.unpack() == ord(b"f")
assert unpacker.read_bytes(3) == b"oob"
assert unpacker.unpack() == ord(b"a")
assert unpacker.unpack() == ord(b"r")
# Test buffer refill
unpacker = Unpacker(io.BytesIO(b"foobar"), read_size=3)
assert unpacker.unpack() == ord(b"f")
assert unpacker.read_bytes(3) == b"oob"
assert unpacker.unpack() == ord(b"a")
assert unpacker.unpack() == ord(b"r")
# Issue 352
u = Unpacker()
u.feed(b"x")
assert bytes(u.read_bytes(1)) == b"x"
with raises(StopIteration):
next(u)
u.feed(b"\1")
assert next(u) == 1
def test_issue124():
unpacker = Unpacker()
unpacker.feed(b"\xa1?\xa1!")
assert tuple(unpacker) == ("?", "!")
assert tuple(unpacker) == ()
unpacker.feed(b"\xa1?\xa1")
assert tuple(unpacker) == ("?",)
assert tuple(unpacker) == ()
unpacker.feed(b"!")
assert tuple(unpacker) == ("!",)
assert tuple(unpacker) == ()
def test_unpack_tell():
stream = io.BytesIO()
messages = [2**i - 1 for i in range(65)]
messages += [-(2**i) for i in range(1, 64)]
messages += [
b"hello",
b"hello" * 1000,
list(range(20)),
{i: bytes(i) * i for i in range(10)},
{i: bytes(i) * i for i in range(32)},
]
offsets = []
for m in messages:
pack(m, stream)
offsets.append(stream.tell())
stream.seek(0)
unpacker = Unpacker(stream, strict_map_key=False)
for m, o in zip(messages, offsets):
m2 = next(unpacker)
assert m == m2
assert o == unpacker.tell()

test/test_stricttype.py Normal file

@ -0,0 +1,59 @@
from collections import namedtuple
from msgpack import ExtType, packb, unpackb
def test_namedtuple():
T = namedtuple("T", "foo bar")
def default(o):
if isinstance(o, T):
return dict(o._asdict())
raise TypeError(f"Unsupported type {type(o)}")
packed = packb(T(1, 42), strict_types=True, use_bin_type=True, default=default)
unpacked = unpackb(packed, raw=False)
assert unpacked == {"foo": 1, "bar": 42}
def test_tuple():
t = ("one", 2, b"three", (4,))
def default(o):
if isinstance(o, tuple):
return {"__type__": "tuple", "value": list(o)}
raise TypeError(f"Unsupported type {type(o)}")
def convert(o):
if o.get("__type__") == "tuple":
return tuple(o["value"])
return o
data = packb(t, strict_types=True, use_bin_type=True, default=default)
expected = unpackb(data, raw=False, object_hook=convert)
assert expected == t
def test_tuple_ext():
t = ("one", 2, b"three", (4,))
MSGPACK_EXT_TYPE_TUPLE = 0
def default(o):
if isinstance(o, tuple):
# Convert to list and pack
payload = packb(list(o), strict_types=True, use_bin_type=True, default=default)
return ExtType(MSGPACK_EXT_TYPE_TUPLE, payload)
raise TypeError(repr(o))
def convert(code, payload):
if code == MSGPACK_EXT_TYPE_TUPLE:
# Unpack and convert to tuple
return tuple(unpackb(payload, raw=False, ext_hook=convert))
raise ValueError(f"Unknown Ext code {code}")
data = packb(t, strict_types=True, use_bin_type=True, default=default)
expected = unpackb(data, raw=False, ext_hook=convert)
assert expected == t

test/test_subtype.py Normal file

@ -0,0 +1,26 @@
#!/usr/bin/env python
from collections import namedtuple
from msgpack import packb
class MyList(list):
pass
class MyDict(dict):
pass
class MyTuple(tuple):
pass
MyNamedTuple = namedtuple("MyNamedTuple", "x y")
def test_types():
assert packb(MyDict()) == packb(dict())
assert packb(MyList()) == packb(list())
assert packb(MyNamedTuple(1, 2)) == packb((1, 2))

test/test_timestamp.py Normal file

@ -0,0 +1,171 @@
import datetime
import pytest
import msgpack
from msgpack.ext import Timestamp
def test_timestamp():
# timestamp32
ts = Timestamp(2**32 - 1)
assert ts.to_bytes() == b"\xff\xff\xff\xff"
packed = msgpack.packb(ts)
assert packed == b"\xd6\xff" + ts.to_bytes()
unpacked = msgpack.unpackb(packed)
assert ts == unpacked
assert ts.seconds == 2**32 - 1 and ts.nanoseconds == 0
# timestamp64
ts = Timestamp(2**34 - 1, 999999999)
assert ts.to_bytes() == b"\xee\x6b\x27\xff\xff\xff\xff\xff"
packed = msgpack.packb(ts)
assert packed == b"\xd7\xff" + ts.to_bytes()
unpacked = msgpack.unpackb(packed)
assert ts == unpacked
assert ts.seconds == 2**34 - 1 and ts.nanoseconds == 999999999
# timestamp96
ts = Timestamp(2**63 - 1, 999999999)
assert ts.to_bytes() == b"\x3b\x9a\xc9\xff\x7f\xff\xff\xff\xff\xff\xff\xff"
packed = msgpack.packb(ts)
assert packed == b"\xc7\x0c\xff" + ts.to_bytes()
unpacked = msgpack.unpackb(packed)
assert ts == unpacked
assert ts.seconds == 2**63 - 1 and ts.nanoseconds == 999999999
# negative fractional
ts = Timestamp.from_unix(-2.3) # s: -3, ns: 700000000
assert ts.seconds == -3 and ts.nanoseconds == 700000000
assert ts.to_bytes() == b"\x29\xb9\x27\x00\xff\xff\xff\xff\xff\xff\xff\xfd"
packed = msgpack.packb(ts)
assert packed == b"\xc7\x0c\xff" + ts.to_bytes()
unpacked = msgpack.unpackb(packed)
assert ts == unpacked
def test_unpack_timestamp():
# timestamp 32
assert msgpack.unpackb(b"\xd6\xff\x00\x00\x00\x00") == Timestamp(0)
# timestamp 64
assert msgpack.unpackb(b"\xd7\xff" + b"\x00" * 8) == Timestamp(0)
with pytest.raises(ValueError):
msgpack.unpackb(b"\xd7\xff" + b"\xff" * 8)
# timestamp 96
assert msgpack.unpackb(b"\xc7\x0c\xff" + b"\x00" * 12) == Timestamp(0)
with pytest.raises(ValueError):
msgpack.unpackb(b"\xc7\x0c\xff" + b"\xff" * 12) == Timestamp(0)
# Undefined
with pytest.raises(ValueError):
msgpack.unpackb(b"\xd4\xff\x00") # fixext 1
with pytest.raises(ValueError):
msgpack.unpackb(b"\xd5\xff\x00\x00") # fixext 2
with pytest.raises(ValueError):
msgpack.unpackb(b"\xc7\x00\xff") # ext8 (len=0)
with pytest.raises(ValueError):
msgpack.unpackb(b"\xc7\x03\xff\0\0\0") # ext8 (len=3)
with pytest.raises(ValueError):
msgpack.unpackb(b"\xc7\x05\xff\0\0\0\0\0") # ext8 (len=5)
def test_timestamp_from():
t = Timestamp(42, 14000)
assert Timestamp.from_unix(42.000014) == t
assert Timestamp.from_unix_nano(42000014000) == t
def test_timestamp_to():
t = Timestamp(42, 14000)
assert t.to_unix() == 42.000014
assert t.to_unix_nano() == 42000014000
def test_timestamp_datetime():
t = Timestamp(42, 14)
utc = datetime.timezone.utc
assert t.to_datetime() == datetime.datetime(1970, 1, 1, 0, 0, 42, 0, tzinfo=utc)
ts = datetime.datetime(2024, 4, 16, 8, 43, 9, 420317, tzinfo=utc)
ts2 = datetime.datetime(2024, 4, 16, 8, 43, 9, 420318, tzinfo=utc)
assert (
Timestamp.from_datetime(ts2).nanoseconds - Timestamp.from_datetime(ts).nanoseconds == 1000
)
ts3 = datetime.datetime(2024, 4, 16, 8, 43, 9, 4256)
ts4 = datetime.datetime(2024, 4, 16, 8, 43, 9, 4257)
assert (
Timestamp.from_datetime(ts4).nanoseconds - Timestamp.from_datetime(ts3).nanoseconds == 1000
)
assert Timestamp.from_datetime(ts).to_datetime() == ts
def test_unpack_datetime():
t = Timestamp(42, 14)
utc = datetime.timezone.utc
packed = msgpack.packb(t)
unpacked = msgpack.unpackb(packed, timestamp=3)
assert unpacked == datetime.datetime(1970, 1, 1, 0, 0, 42, 0, tzinfo=utc)
def test_pack_unpack_before_epoch():
utc = datetime.timezone.utc
t_in = datetime.datetime(1960, 1, 1, tzinfo=utc)
packed = msgpack.packb(t_in, datetime=True)
unpacked = msgpack.unpackb(packed, timestamp=3)
assert unpacked == t_in
def test_pack_datetime():
t = Timestamp(42, 14000)
dt = t.to_datetime()
utc = datetime.timezone.utc
assert dt == datetime.datetime(1970, 1, 1, 0, 0, 42, 14, tzinfo=utc)
packed = msgpack.packb(dt, datetime=True)
packed2 = msgpack.packb(t)
assert packed == packed2
unpacked = msgpack.unpackb(packed)
print(packed, unpacked)
assert unpacked == t
unpacked = msgpack.unpackb(packed, timestamp=3)
assert unpacked == dt
x = []
packed = msgpack.packb(dt, datetime=False, default=x.append)
assert x
assert x[0] == dt
assert msgpack.unpackb(packed) is None
def test_issue451():
# https://github.com/msgpack/msgpack-python/issues/451
utc = datetime.timezone.utc
dt = datetime.datetime(2100, 1, 1, 1, 1, tzinfo=utc)
packed = msgpack.packb(dt, datetime=True)
assert packed == b"\xd6\xff\xf4\x86eL"
unpacked = msgpack.unpackb(packed, timestamp=3)
assert dt == unpacked
def test_pack_datetime_without_tzinfo():
dt = datetime.datetime(1970, 1, 1, 0, 0, 42, 14)
with pytest.raises(ValueError, match="where tzinfo=None"):
packed = msgpack.packb(dt, datetime=True)
dt = datetime.datetime(1970, 1, 1, 0, 0, 42, 14)
packed = msgpack.packb(dt, datetime=True, default=lambda x: None)
assert packed == msgpack.packb(None)
utc = datetime.timezone.utc
dt = datetime.datetime(1970, 1, 1, 0, 0, 42, 14, tzinfo=utc)
packed = msgpack.packb(dt, datetime=True)
unpacked = msgpack.unpackb(packed, timestamp=3)
assert unpacked == dt
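To summarize the behaviour covered above in one place (the sample date is arbitrary): aware datetimes pack as the predefined timestamp ext type when datetime=True, and unpack either as Timestamp (the default) or back to datetime with timestamp=3.

```python
import datetime
import msgpack
from msgpack.ext import Timestamp

utc = datetime.timezone.utc
dt = datetime.datetime(2024, 5, 6, 12, 0, 0, tzinfo=utc)

packed = msgpack.packb(dt, datetime=True)

# Default: the timestamp ext type comes back as a Timestamp object.
ts = msgpack.unpackb(packed)
assert isinstance(ts, Timestamp) and ts.to_datetime() == dt

# timestamp=3: decode straight to an aware datetime.
assert msgpack.unpackb(packed, timestamp=3) == dt
```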

89
test/test_unpack.py Normal file

@@ -0,0 +1,89 @@
import sys
from io import BytesIO
from pytest import mark, raises
from msgpack import ExtType, OutOfData, Unpacker, packb
def test_unpack_array_header_from_file():
f = BytesIO(packb([1, 2, 3, 4]))
unpacker = Unpacker(f)
assert unpacker.read_array_header() == 4
assert unpacker.unpack() == 1
assert unpacker.unpack() == 2
assert unpacker.unpack() == 3
assert unpacker.unpack() == 4
with raises(OutOfData):
unpacker.unpack()
@mark.skipif(
"not hasattr(sys, 'getrefcount') == True",
reason="sys.getrefcount() is needed to pass this test",
)
def test_unpacker_hook_refcnt():
result = []
def hook(x):
result.append(x)
return x
basecnt = sys.getrefcount(hook)
up = Unpacker(object_hook=hook, list_hook=hook)
assert sys.getrefcount(hook) >= basecnt + 2
up.feed(packb([{}]))
up.feed(packb([{}]))
assert up.unpack() == [{}]
assert up.unpack() == [{}]
assert result == [{}, [{}], {}, [{}]]
del up
assert sys.getrefcount(hook) == basecnt
def test_unpacker_ext_hook():
class MyUnpacker(Unpacker):
def __init__(self):
super().__init__(ext_hook=self._hook, raw=False)
def _hook(self, code, data):
if code == 1:
return int(data)
else:
return ExtType(code, data)
unpacker = MyUnpacker()
unpacker.feed(packb({"a": 1}))
assert unpacker.unpack() == {"a": 1}
unpacker.feed(packb({"a": ExtType(1, b"123")}))
assert unpacker.unpack() == {"a": 123}
unpacker.feed(packb({"a": ExtType(2, b"321")}))
assert unpacker.unpack() == {"a": ExtType(2, b"321")}
def test_unpacker_tell():
objects = 1, 2, "abc", "def", "ghi"
packed = b"\x01\x02\xa3abc\xa3def\xa3ghi"
positions = 1, 2, 6, 10, 14
unpacker = Unpacker(BytesIO(packed))
for obj, unp, pos in zip(objects, unpacker, positions):
assert obj == unp
assert pos == unpacker.tell()
def test_unpacker_tell_read_bytes():
objects = 1, "abc", "ghi"
packed = b"\x01\x02\xa3abc\xa3def\xa3ghi"
raw_data = b"\x02", b"\xa3def", b""
lengths = 1, 4, 999
positions = 1, 6, 14
unpacker = Unpacker(BytesIO(packed))
for obj, unp, pos, n, raw in zip(objects, unpacker, positions, lengths, raw_data):
assert obj == unp
assert pos == unpacker.tell()
assert unpacker.read_bytes(n) == raw
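For reference, a minimal sketch of the streaming Unpacker usage these tests build on, both file-driven and feed()-driven (the payloads are arbitrary):

```python
from io import BytesIO
from msgpack import Unpacker, packb

# Pull objects from a file-like source...
buf = BytesIO(packb({"id": 1}) + packb({"id": 2}))
assert list(Unpacker(buf, raw=False)) == [{"id": 1}, {"id": 2}]

# ...or push bytes in incrementally with feed().
unpacker = Unpacker(raw=False)
unpacker.feed(packb([1, 2]))
unpacker.feed(packb("tail"))
assert unpacker.unpack() == [1, 2]
assert unpacker.unpack() == "tail"
```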

(deleted file)

@@ -1,16 +0,0 @@
#!/usr/bin/env python
# coding: utf-8
from nose import main
from nose.tools import *
from msgpack import packb, unpackb
def test_unpack_buffer():
from array import array
buf = array('b')
buf.fromstring(packb(('foo', 'bar')))
obj = unpackb(buf)
assert_equal((b'foo', b'bar'), obj)
if __name__ == '__main__':
main()
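The removed test above relies on array.fromstring(), which no longer exists in Python 3; a rough modern equivalent, since unpackb() accepts any object exporting the buffer protocol:

```python
from array import array
from msgpack import packb, unpackb

# array("B", ...) is bytes-like, so it can be unpacked directly.
buf = array("B", packb(("foo", "bar")))
assert unpackb(buf, use_list=False, raw=False) == ("foo", "bar")
```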

(deleted file)

@@ -1,105 +0,0 @@
#!/usr/bin/env python
# coding: utf-8
from nose import main
from nose.tools import *
from msgpack import packs, unpacks
def check(length, obj):
v = packs(obj)
assert_equal(len(v), length, "%r length should be %r but get %r" % (obj, length, len(v)))
assert_equal(unpacks(v), obj)
def test_1():
for o in [None, True, False, 0, 1, (1 << 6), (1 << 7) - 1, -1,
-((1<<5)-1), -(1<<5)]:
check(1, o)
def test_2():
for o in [1 << 7, (1 << 8) - 1,
-((1<<5)+1), -(1<<7)
]:
check(2, o)
def test_3():
for o in [1 << 8, (1 << 16) - 1,
-((1<<7)+1), -(1<<15)]:
check(3, o)
def test_5():
for o in [1 << 16, (1 << 32) - 1,
-((1<<15)+1), -(1<<31)]:
check(5, o)
def test_9():
for o in [1 << 32, (1 << 64) - 1,
-((1<<31)+1), -(1<<63),
1.0, 0.1, -0.1, -1.0]:
check(9, o)
def check_raw(overhead, num):
check(num + overhead, b" " * num)
def test_fixraw():
check_raw(1, 0)
check_raw(1, (1<<5) - 1)
def test_raw16():
check_raw(3, 1<<5)
check_raw(3, (1<<16) - 1)
def test_raw32():
check_raw(5, 1<<16)
def check_array(overhead, num):
check(num + overhead, (None,) * num)
def test_fixarray():
check_array(1, 0)
check_array(1, (1 << 4) - 1)
def test_array16():
check_array(3, 1 << 4)
check_array(3, (1<<16)-1)
def test_array32():
check_array(5, (1<<16))
def match(obj, buf):
assert_equal(packs(obj), buf)
assert_equal(unpacks(buf), obj)
def test_match():
cases = [
(None, b'\xc0'),
(False, b'\xc2'),
(True, b'\xc3'),
(0, b'\x00'),
(127, b'\x7f'),
(128, b'\xcc\x80'),
(256, b'\xcd\x01\x00'),
(-1, b'\xff'),
(-33, b'\xd0\xdf'),
(-129, b'\xd1\xff\x7f'),
({1:1}, b'\x81\x01\x01'),
(1.0, b"\xcb\x3f\xf0\x00\x00\x00\x00\x00\x00"),
((), b'\x90'),
(tuple(range(15)),b"\x9f\x00\x01\x02\x03\x04\x05\x06\x07\x08\x09\x0a\x0b\x0c\x0d\x0e"),
(tuple(range(16)),b"\xdc\x00\x10\x00\x01\x02\x03\x04\x05\x06\x07\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"),
({}, b'\x80'),
(dict([(x,x) for x in range(15)]), b'\x8f\x00\x00\x01\x01\x02\x02\x03\x03\x04\x04\x05\x05\x06\x06\x07\x07\x08\x08\t\t\n\n\x0b\x0b\x0c\x0c\r\r\x0e\x0e'),
(dict([(x,x) for x in range(16)]), b'\xde\x00\x10\x00\x00\x01\x01\x02\x02\x03\x03\x04\x04\x05\x05\x06\x06\x07\x07\x08\x08\t\t\n\n\x0b\x0b\x0c\x0c\r\r\x0e\x0e\x0f\x0f'),
]
for v, p in cases:
match(v, p)
def test_unicode():
assert_equal(b'foobar', unpacks(packs('foobar')))
if __name__ == '__main__':
main()
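The size tiers this removed test checked still hold; a quick sketch of the integer and float cases against the current packb():

```python
from msgpack import packb

assert len(packb(127)) == 1       # positive fixint
assert len(packb(-32)) == 1       # negative fixint
assert len(packb(1 << 7)) == 2    # uint 8
assert len(packb(1 << 8)) == 3    # uint 16
assert len(packb(1 << 16)) == 5   # uint 32
assert len(packb(1 << 32)) == 9   # uint 64
assert len(packb(1.0)) == 9       # float 64
```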

(deleted file)

@@ -1,14 +0,0 @@
#!/usr/bin/env python
# coding: utf-8
from nose.tools import *
from msgpack import packs, unpacks
import datetime
def test_raise_on_find_unsupported_value():
assert_raises(TypeError, packs, datetime.datetime.now())
if __name__ == '__main__':
from nose import main
main()

(deleted file)

@@ -1,75 +0,0 @@
#!/usr/bin/env python
# coding: utf-8
from nose import main
from nose.tools import *
from msgpack import unpacks
def check(src, should):
assert_equal(unpacks(src), should)
def testSimpleValue():
check(b"\x93\xc0\xc2\xc3",
(None, False, True,))
def testFixnum():
check(b"\x92\x93\x00\x40\x7f\x93\xe0\xf0\xff",
((0,64,127,), (-32,-16,-1,),)
)
def testFixArray():
check(b"\x92\x90\x91\x91\xc0",
((),((None,),),),
)
def testFixRaw():
check(b"\x94\xa0\xa1a\xa2bc\xa3def",
(b"", b"a", b"bc", b"def",),
)
def testFixMap():
check(
b"\x82\xc2\x81\xc0\xc0\xc3\x81\xc0\x80",
{False: {None: None}, True:{None:{}}},
)
def testUnsignedInt():
check(
b"\x99\xcc\x00\xcc\x80\xcc\xff\xcd\x00\x00\xcd\x80\x00"
b"\xcd\xff\xff\xce\x00\x00\x00\x00\xce\x80\x00\x00\x00"
b"\xce\xff\xff\xff\xff",
(0, 128, 255, 0, 32768, 65535, 0, 2147483648, 4294967295,),
)
def testSignedInt():
check(b"\x99\xd0\x00\xd0\x80\xd0\xff\xd1\x00\x00\xd1\x80\x00"
b"\xd1\xff\xff\xd2\x00\x00\x00\x00\xd2\x80\x00\x00\x00"
b"\xd2\xff\xff\xff\xff",
(0, -128, -1, 0, -32768, -1, 0, -2147483648, -1,))
def testRaw():
check(b"\x96\xda\x00\x00\xda\x00\x01a\xda\x00\x02ab\xdb\x00\x00"
b"\x00\x00\xdb\x00\x00\x00\x01a\xdb\x00\x00\x00\x02ab",
(b"", b"a", b"ab", b"", b"a", b"ab"))
def testArray():
check(b"\x96\xdc\x00\x00\xdc\x00\x01\xc0\xdc\x00\x02\xc2\xc3\xdd\x00"
b"\x00\x00\x00\xdd\x00\x00\x00\x01\xc0\xdd\x00\x00\x00\x02"
b"\xc2\xc3",
((), (None,), (False,True), (), (None,), (False,True))
)
def testMap():
check(
b"\x96"
b"\xde\x00\x00"
b"\xde\x00\x01\xc0\xc2"
b"\xde\x00\x02\xc0\xc2\xc3\xc2"
b"\xdf\x00\x00\x00\x00"
b"\xdf\x00\x00\x00\x01\xc0\xc2"
b"\xdf\x00\x00\x00\x02\xc0\xc2\xc3\xc2",
({}, {None: False}, {True: False, None: False}, {},
{None: False}, {True: False, None: False}))
if __name__ == '__main__':
main()

(deleted file)

@@ -1,44 +0,0 @@
#!/usr/bin/env python
# coding: utf-8
from nose import main
from nose.tools import *
from msgpack import packs, unpacks
def _decode_complex(obj):
if b'__complex__' in obj:
return complex(obj[b'real'], obj[b'imag'])
return obj
def _encode_complex(obj):
if isinstance(obj, complex):
return {b'__complex__': True, b'real': 1, b'imag': 2}
return obj
def test_encode_hook():
packed = packs([3, 1+2j], default=_encode_complex)
unpacked = unpacks(packed)
eq_(unpacked[1], {b'__complex__': True, b'real': 1, b'imag': 2})
def test_decode_hook():
packed = packs([3, {b'__complex__': True, b'real': 1, b'imag': 2}])
unpacked = unpacks(packed, object_hook=_decode_complex)
eq_(unpacked[1], 1+2j)
@raises(ValueError)
def test_bad_hook():
packed = packs([3, 1+2j], default=lambda o: o)
unpacked = unpacks(packed)
def _arr_to_str(arr):
return ''.join(str(c) for c in arr)
def test_array_hook():
packed = packs([1,2,3])
unpacked = unpacks(packed, list_hook=_arr_to_str)
eq_(unpacked, '123')
if __name__ == '__main__':
#main()
test_decode_hook()
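The hook pattern from this removed test translates directly to the current packb/unpackb API; a brief sketch using str keys (modern unpacking decodes map keys by default):

```python
from msgpack import packb, unpackb

def encode_complex(obj):
    if isinstance(obj, complex):
        return {"__complex__": True, "real": obj.real, "imag": obj.imag}
    raise TypeError(f"cannot serialize {type(obj)!r}")

def decode_complex(obj):
    # object_hook sees every unpacked map; only rewrite the tagged ones.
    if obj.get("__complex__"):
        return complex(obj["real"], obj["imag"])
    return obj

packed = packb([3, 1 + 2j], default=encode_complex)
assert unpackb(packed, object_hook=decode_complex) == [3, 1 + 2j]
```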

(deleted file)

@@ -1,82 +0,0 @@
#!/usr/bin/env python
# coding: utf-8
from nose import main
from nose.tools import *
from msgpack import packs, unpacks, Unpacker, Packer
from io import BytesIO
def check(data):
re = unpacks(packs(data))
assert_equal(re, data)
def testPack():
test_data = [
0, 1, 127, 128, 255, 256, 65535, 65536,
-1, -32, -33, -128, -129, -32768, -32769,
1.0,
b"", b"a", b"a"*31, b"a"*32,
None, True, False,
(), ((),), ((), None,),
{None: 0},
(1<<23),
]
for td in test_data:
check(td)
def testPackUnicode():
test_data = [
"", "abcd", ("defgh",), "Русский текст",
]
for td in test_data:
re = unpacks(packs(td, encoding='utf-8'), encoding='utf-8')
assert_equal(re, td)
packer = Packer(encoding='utf-8')
data = packer.pack(td)
re = Unpacker(BytesIO(data), encoding='utf-8').unpack()
assert_equal(re, td)
def testPackUTF32():
test_data = [
"", "abcd", ("defgh",), "Русский текст",
]
for td in test_data:
re = unpacks(packs(td, encoding='utf-32'), encoding='utf-32')
assert_equal(re, td)
def testPackBytes():
test_data = [
b"", b"abcd", (b"defgh",),
]
for td in test_data:
check(td)
def testIgnoreUnicodeErrors():
re = unpacks(packs(b'abc\xeddef'),
encoding='utf-8', unicode_errors='ignore')
assert_equal(re, "abcdef")
@raises(UnicodeDecodeError)
def testStrictUnicodeUnpack():
unpacks(packs(b'abc\xeddef'), encoding='utf-8')
@raises(UnicodeEncodeError)
def testStrictUnicodePack():
packs("abc\xeddef", encoding='ascii', unicode_errors='strict')
def testIgnoreErrorsPack():
re = unpacks(packs("abcФФФdef", encoding='ascii', unicode_errors='ignore'), encoding='utf-8')
assert_equal(re, "abcdef")
@raises(TypeError)
def testNoEncoding():
packs("abc", encoding=None)
def testDecodeBinary():
re = unpacks(packs("abc"), encoding=None)
assert_equal(re, b"abc")
if __name__ == '__main__':
main()
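This removed test leans on the legacy packs()/unpacks() aliases and the encoding= keyword, both gone from the current API; a short sketch of the modern equivalent (str is packed as UTF-8 and decoded back because raw=False is the default since 1.0):

```python
from io import BytesIO
from msgpack import Packer, Unpacker, packb, unpackb

data = packb(["abcd", "Русский текст"])
assert unpackb(data) == ["abcd", "Русский текст"]

# The streaming classes behave the same way.
assert Unpacker(BytesIO(Packer().pack("defgh"))).unpack() == "defgh"
```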

(deleted file)

@@ -1,36 +0,0 @@
#!/usr/bin/env python
# coding: utf-8
from msgpack import Unpacker
def test_foobar():
unpacker = Unpacker(read_size=3)
unpacker.feed(b'foobar')
assert unpacker.unpack() == ord(b'f')
assert unpacker.unpack() == ord(b'o')
assert unpacker.unpack() == ord(b'o')
assert unpacker.unpack() == ord(b'b')
assert unpacker.unpack() == ord(b'a')
assert unpacker.unpack() == ord(b'r')
try:
o = unpacker.unpack()
print(("Oops!", o))
assert 0
except StopIteration:
assert 1
else:
assert 0
unpacker.feed(b'foo')
unpacker.feed(b'bar')
k = 0
for o, e in zip(unpacker, b'foobarbaz'):
assert o == e
k += 1
assert k == len(b'foobar')
if __name__ == '__main__':
test_foobar()