mirror of https://github.com/qpdf/qpdf.git synced 2024-11-02 03:42:30 +00:00
Commit Graph

2565 Commits

Author SHA1 Message Date
Jay Berkenbilt
9b2eb01e25 Exercise object description in tests 2022-05-20 14:23:32 -04:00
Jay Berkenbilt
6c2fb5b8f0 Add test for bad data and bad datafile 2022-05-20 13:33:30 -04:00
Jay Berkenbilt
d065098089 Test --update-from-json 2022-05-20 11:10:12 -04:00
Jay Berkenbilt
ef955b04b5 Bug fix: don't clobber stream length with replaceDict 2022-05-20 11:09:45 -04:00
Jay Berkenbilt
3eb77a7004 JSON: detect duplicate dictionary keys while parsing 2022-05-20 10:13:15 -04:00
Jay Berkenbilt
6d4e3ba8a4 Test (and fix) handling of dangling references 2022-05-20 09:16:25 -04:00
Jay Berkenbilt
5a2aa59479 Bug fix: isReserved() true for indirect reference to reserved object 2022-05-20 09:16:25 -04:00
Jay Berkenbilt
35b1e1c493 Explicitly test ignoring unknown keys in JSON input 2022-05-20 09:16:25 -04:00
Jay Berkenbilt
dc8df962d8 Make version default to latest for --json-output (like --json) 2022-05-20 09:16:25 -04:00
Jay Berkenbilt
907df2c823 Round-trip tests with --json-stream-data=file 2022-05-20 09:16:25 -04:00
Jay Berkenbilt
a83b7b0611 Tests with manually constructed qpdf json 2022-05-20 09:16:25 -04:00
Jay Berkenbilt
7f8c4b183d Add tests for --json-input 2022-05-20 09:16:25 -04:00
Jay Berkenbilt
6c7326b290 JSON fix: correctly parse UTF-16 surrogate pairs 2022-05-20 09:16:25 -04:00
Jay Berkenbilt
1ec561daa4 Add more names and strings in good13
* native UTF-8 strings
* names whose PDF and canonical syntax differ in both dictionary key
  positions and other positions

For json, names are converted both as names and directly when used as
dictionary keys.
2022-05-20 09:16:25 -04:00
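As background for the name conversion mentioned above: a name's PDF syntax and its canonical syntax differ when the PDF form uses #xx hex escapes in the name token. The sketch below is illustrative only (it is not qpdf's code) and simply decodes those escapes to show the canonical form:

----------
#!/usr/bin/env python3
# Illustrative sketch, not qpdf's implementation: decode the #xx hex
# escapes that make a name's PDF syntax differ from its canonical form.
import re

def canonical_name(pdf_syntax):
    return re.sub(r'#([0-9A-Fa-f]{2})',
                  lambda m: chr(int(m.group(1), 16)),
                  pdf_syntax)

print(canonical_name('/text#2fplain'))  # -> /text/plain
print(canonical_name('/with#20space'))  # -> /with space
----------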
Jay Berkenbilt
6c5e590673 Rename all test files: _ to - 2022-05-20 09:16:25 -04:00
Jay Berkenbilt
6f43bf8de3 Major rework -- see long comments
* Replace --create-from-json=file with --json-input, which causes the
  regular input to be treated as json.
* Eliminate --to-json
* In --json=2, bring back "objects" and eliminate "objectinfo". Stream
  data is never present.
* In --json-output=2, write "qpdf-v2" with "objects" and include
  stream data.
2022-05-20 09:16:25 -04:00
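For orientation, a sketch of how the reworked options fit together. The exact invocations are an assumption based on the option names above, and the file names are placeholders:

----------
#!/usr/bin/env python3
# Hypothetical usage sketch based on the options described above;
# file names are placeholders.
import json
import subprocess

# --json=2: inspection json with "objects" but never stream data
inspection = json.loads(subprocess.run(
    ['qpdf', '--json=2', 'in.pdf'],
    check=True, capture_output=True, text=True).stdout)

# --json-output=2: qpdf json including stream data ...
subprocess.run(['qpdf', '--json-output=2', 'in.pdf', 'out.json'], check=True)

# ... which --json-input can later read back as the regular input
subprocess.run(['qpdf', '--json-input', 'out.json', 'roundtrip.pdf'], check=True)
----------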
Jay Berkenbilt
23fc6756f1 Add QUtil::FileCloser to the public API 2022-05-20 09:16:25 -04:00
Jay Berkenbilt
0fe8d44762 Support stream data -- not tested
There are no automated tests yet, but committing work so far in
preparation for some refactoring.
2022-05-20 09:16:25 -04:00
Jay Berkenbilt
63c7eefe9d replaceStreamData: accept uninitialized filter/decode_parms
These mean to leave the original values alone. This is needed for
reconstructing streams from JSON given that the stream data and stream
dictionary may appear in any order in the JSON.
2022-05-20 09:16:25 -04:00
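A small illustration of the ordering problem: whichever stream member arrives first must be applied without disturbing the one not yet seen, which is what the "leave the original values alone" arguments make possible. The member names below follow the "stream"/"dict" shape used by the conversion script later in this log, but treating them as the actual qpdf json keys is an assumption:

----------
#!/usr/bin/env python3
# Illustrative only: the same stream with its members in either order;
# member names and the base64 payload are assumptions, not necessarily
# the real qpdf json keys.
import json

either_order = [
    '{"stream": {"dict": {"/Type": "/XObject"}, "data": "aGVsbG8="}}',
    '{"stream": {"data": "aGVsbG8=", "dict": {"/Type": "/XObject"}}}',
]

for text in either_order:
    seen = {}                        # nothing applied yet
    for key, value in json.loads(text)['stream'].items():
        seen[key] = value            # apply each member as it arrives,
                                     # leaving the other member alone
    print(seen['dict'], seen['data'])
----------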
Jay Berkenbilt
56f1b411fe Back out fluent QPDFObjectHandle methods. Keep the andGet methods.
I decided these were confusing and inconsistent with how JSON works.
They muddle the API rather than improving it.
2022-05-20 09:16:25 -04:00
Jay Berkenbilt
7e7a9c4379 Parse objects; stream data is not yet handled 2022-05-20 09:16:25 -04:00
Jay Berkenbilt
be0ed6ab5e Add new error type for JSON 2022-05-20 07:54:09 -04:00
Jay Berkenbilt
9064542b5f Add private methods for reserving specific objects 2022-05-20 07:54:09 -04:00
Jay Berkenbilt
7fa5d1773b Implement top-level qpdf json parsing 2022-05-16 13:41:40 -04:00
Jay Berkenbilt
8d42eb2632 Add scaffolding for QPDF JSON reactor 2022-05-16 13:41:40 -04:00
Jay Berkenbilt
4fe2e06b47 Add --create-from-json and --update-from-json arguments
Also add stubs for top-level QPDF methods (createFromJSON,
updateFromJSON)
2022-05-16 13:41:40 -04:00
Jay Berkenbilt
ed6130036c TODO: solidify work for JSON to PDF 2022-05-16 13:41:40 -04:00
Jay Berkenbilt
9a0e9a1a9e Remove offset from missing /Root error
The last offset is irrelevant when /Root cannot be found.
2022-05-16 13:39:26 -04:00
Jay Berkenbilt
051ae7c282 Improve handling of replacing stream data with empty strings
When an empty string was passed to replaceStreamData, the code was
passing a null pointer to memcpy. Since a 0 size was also passed, this
was harmless, but it triggers sanitizer errors. The code properly
handles a null pointer as the buffer in other places.
2022-05-16 13:39:26 -04:00
Jay Berkenbilt
60ec94a7c3 Add QUtil::is_long_long 2022-05-16 13:39:26 -04:00
Jay Berkenbilt
173b944ef8 Split qpdf.test into multiple test suites
This makes it a lot easier to run parts of the test suite.
2022-05-14 17:35:06 -04:00
Jay Berkenbilt
4b642caf11 Update qtest-driver to log invalid tests
This is taken from an unreleased change to qtest.
2022-05-14 17:35:06 -04:00
Jay Berkenbilt
4c7cfd5cbc JSON reactor: improve handling of nested containers
Call the parent container's item method before calling the child
item's start method so we can easily know the current nesting level
when nested items are added.
2022-05-14 17:35:06 -04:00
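The call ordering is easier to see in a toy walker. This is purely illustrative and is not qpdf's reactor interface; the method names below are made up:

----------
#!/usr/bin/env python3
# Toy illustration of the ordering described above: the parent
# container's item callback fires before the nested container's start
# callback, so the nesting depth is known when the child starts.
import json

class PrintingReactor:
    def item(self, depth):
        print('  ' * depth + f'parent sees item at depth {depth}')

    def start_container(self, depth):
        print('  ' * depth + f'start container at depth {depth}')

    def end_container(self, depth):
        print('  ' * depth + f'end container at depth {depth}')

    def scalar(self, value, depth):
        print('  ' * depth + f'scalar {value!r}')

def walk(node, reactor, depth=0):
    if isinstance(node, (dict, list)):
        reactor.start_container(depth)
        children = node.values() if isinstance(node, dict) else node
        for child in children:
            reactor.item(depth)              # parent's item method first ...
            walk(child, reactor, depth + 1)  # ... then the child's start
        reactor.end_container(depth)
    else:
        reactor.scalar(node, depth)

walk(json.loads('{"a": {"b": [1, 2]}}'), PrintingReactor())
----------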
Jay Berkenbilt
2a2f7f1bba Add maxobjectid to JSON 2022-05-08 13:45:20 -04:00
Jay Berkenbilt
02e8ef6fd9 TODO note about linux binary distribution runpath 2022-05-08 13:45:20 -04:00
Jay Berkenbilt
e9390aeaaa Add --to-json option 2022-05-08 13:45:20 -04:00
Jay Berkenbilt
2e87d593eb Test inline stream data with different decode levels 2022-05-08 13:45:20 -04:00
Jay Berkenbilt
f08f398920 Test json v2 with invalid stream data 2022-05-08 13:45:20 -04:00
Jay Berkenbilt
c76536dd9a Implement JSON v2 output 2022-05-08 13:45:20 -04:00
Jay Berkenbilt
bdfc4da510 Apply script across future v2 test files
There is one unexpected pass in this commit. This script was applied
to the files changed in this commit:

----------
#!/usr/bin/env python3
import json
import sys

def json_dumps(data):
    return json.dumps(data, ensure_ascii=False,
                      indent=2, separators=(',', ': '))

for filename in sys.argv[1:]:
    with open(filename, 'r') as f:
        data = json.loads(f.read())
    data['version'] = 2
    objectinfo = {}
    if 'objectinfo' in data:
        objectinfo = data['objectinfo']
        del data['objectinfo']
    if 'objects' not in data:
        continue
    qpdf = {'jsonversion': 2, 'pdfversion': '1.3', 'objects': {}}
    for k, v in data['objects'].items():
        is_stream = objectinfo.get(k, {}).get('stream', {}).get('is', False)
        if k.endswith(' R'):
            k = 'obj:' + k
        if is_stream:
            v = {'stream': {'dict': v}}
        else:
            v = {'value': v}
        qpdf['objects'][k] = v
    data['qpdf'] = qpdf
    del data['objects']
    print(json_dumps(data))
----------
2022-05-08 13:45:20 -04:00
Jay Berkenbilt
8d348974aa Prepare test suite for json v2 2022-05-08 13:45:20 -04:00
Jay Berkenbilt
15272662f6 Fix typo in json output key name
moddify -> modify. Also carefully spell checked all remaining keys by
splitting them into words and running a spell checker, not just
relying on visual proofreading. That was the only one.
2022-05-08 13:45:20 -04:00
Jay Berkenbilt
1bc8abfdd3 Implement JSON v2 for Stream
Not fully exercised in this commit
2022-05-08 13:45:20 -04:00
Jay Berkenbilt
3246923cf2 Implement JSON v2 for String
Also refine the heuristic for deciding whether to use hexadecimal
notation for a string.
2022-05-08 13:45:20 -04:00
Jay Berkenbilt
16f4f94cd9 Prepare code for JSON v2
Update getJSON() methods and calls to them
2022-05-07 11:12:01 -04:00
Jay Berkenbilt
a9fbbd5dca Objectinfo json: write incrementally and in numeric order
This script was used on test data:

----------
#!/usr/bin/env python3
import json
import sys
import re

def json_dumps(data):
    return json.dumps(data, ensure_ascii=False,
                      indent=2, separators=(',', ': '))

for filename in sys.argv[1:]:
    with open(filename, 'r') as f:
        data = json.loads(f.read())
    if 'objectinfo' not in data:
        continue
    trailer = None
    to_sort = []
    for k, v in data['objectinfo'].items():
        if k == 'trailer':
            trailer = v
        else:
            m = re.match(r'^(\d+) \d+ R', k)
            if m:
                to_sort.append([int(m.group(1)), k, v])
    newobjectinfo = {x[1]: x[2] for x in sorted(to_sort)}
    if trailer is not None:
        newobjectinfo['trailer'] = trailer
    data['objectinfo'] = newobjectinfo
    print(json_dumps(data))
----------
2022-05-07 08:26:31 -04:00
Jay Berkenbilt
948de60990 Objects json: write incrementally and in numeric order
The following script was used to adjust test data:

----------
#!/usr/bin/env python3
import json
import sys
import re

def json_dumps(data):
    return json.dumps(data, ensure_ascii=False,
                      indent=2, separators=(',', ': '))

for filename in sys.argv[1:]:
    with open(filename, 'r') as f:
        data = json.loads(f.read())
    if 'objects' not in data:
        continue
    trailer = None
    to_sort = []
    for k, v in data['objects'].items():
        if k == 'trailer':
            trailer = v
        else:
            m = re.match(r'^(\d+) \d+ R', k)
            if m:
                to_sort.append([int(m.group(1)), k, v])
    newobjects = {x[1]: x[2] for x in sorted(to_sort)}
    if trailer is not None:
        newobjects['trailer'] = trailer
    data['objects'] = newobjects
print(json_dumps(data))
----------
2022-05-07 08:26:31 -04:00
Jay Berkenbilt
f50274ef46 Pages json: write each page incrementally 2022-05-07 08:26:31 -04:00
Jay Berkenbilt
1615d7feaf Make JSON::writeNext public 2022-05-07 08:26:31 -04:00
Jay Berkenbilt
dc9b7287cd Top-level json: write incrementally
This commit just changes the order in which fields are written to the
json without changing their content. All the json files in the test
suite were modified with this script to ensure that we didn't get any
changes other than ordering.

----------
#!/usr/bin/env python3
import json
import sys

def json_dumps(data):
    return json.dumps(data, ensure_ascii=False,
                      indent=2, separators=(',', ': '))

for filename in sys.argv[1:]:
    with open(filename, 'r') as f:
        data = json.loads(f.read())
    newdata = {}
    for i in ('version', 'parameters', 'pages', 'pagelabels',
              'acroform', 'attachments', 'encrypt', 'outlines',
              'objects', 'objectinfo'):
        if i in data:
            newdata[i] = data[i]
    print(json_dumps(newdata))
----------
2022-05-07 08:26:31 -04:00