Next
====

* At next release, hide release-qpdf-10.6.3.0cmake* versions at readthedocs

* Stay on top of https://github.com/pikepdf/pikepdf/pull/315

In order:

* json v2

Other (do in any order):

* Make job JSON accept a single element and treat it as an array of
  one when an array is expected. This allows for making things
  repeatable in the future without breaking compatibility and is
  needed for the remote-attachment fix to be backward compatible.

* Add an option --ignore-encryption to ignore encryption information
  and treat encrypted files as if they weren't encrypted. This should
  make it possible to solve #598 (--show-encryption without a
  password). We'll need to make sure we don't try to filter any
  streams in this mode. Ideally we should be able to combine this with
  --json so we can look at the raw encrypted strings and streams if we
  want to. Since providing the password may reveal additional details,
  --show-encryption could potentially retry with this option if the
  first attempt doesn't work. Then, with the file open, we can read
  the encryption dictionary normally.

* Find all places in the code that write to std::cout, std::cerr,
  stdout, or stderr to make sure they obey the default output stream
  settings for QPDF and QPDFJob. This probably includes adding a
  Pl_OStream pipeline.

* Nice to have:
  * Split qpdf.test into multiple tests
  * In libtests, separate executables that need the object library
    from those that strictly use the public API. Move as many of the
    test drivers from the qpdf directory into the latter category as
    long as doing so isn't too troublesome from a coverage standpoint.
  * Rework tests so that nothing is written into the source directory.
    * Ideally the entire build could then be done with a read-only
      source tree.

Soon: Break ground on "Document-level work"

Output JSON v2
==============

----
notes from 5/2:

Need new pipelines:
* Pl_OStream(std::ostream) with semantics like Pl_StdioFile
* Pl_String to std::string with semantics like Pl_Buffer

New Pipeline methods:
* writeString(std::string const&)
* writeCString(char*)
* writeChars(char*, size_t)

* Consider templated operator<< which could specialize for char* and
  std::string and could use std::ostringstream otherwise
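
As a concrete illustration, here is a minimal sketch of what the
proposed writeString convenience could look like, implemented as a
free function over the existing Pipeline::write(unsigned char*,
size_t) interface. The helper name comes from the list above and is
not current qpdf API; this is a sketch, not a finished design.

```
#include <qpdf/Pipeline.hh>
#include <qpdf/Pl_StdioFile.hh>
#include <cstdio>
#include <string>

// Sketch only: writeString is a proposed convenience from the notes
// above, shown here as a free function over Pipeline::write.
static void
writeString(Pipeline* p, std::string const& s)
{
    p->write(
        reinterpret_cast<unsigned char*>(const_cast<char*>(s.c_str())),
        s.length());
}

int main()
{
    Pl_StdioFile out("stdout", stdout);
    writeString(&out, "hello pipeline\n");
    out.finish();
    return 0;
}
```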

See if I can change all output and error messages issued by the
library, when context is available, to have a pipeline rather than a
FILE* or std::ostream. This makes it possible for people to capture
output more flexibly.

JSON: rather than unparse() -> string, there should be a write method
that takes a pipeline and a depth. Then rewrite all the unparse
methods to use it. This makes incremental write possible as well as
writing arbitrarily large amounts of output.

JSON::parse should work from an InputSource. BufferInputSource can
already start with a std::string.

Have a json blob defined by a function that takes a pipeline and
writes data to the pipeline. Its writer should create a Pl_Base64 ->
Pl_Concatenate in front of the pipeline passed to write and call the
function with that.

Add methods needed to do incremental writes. Basically we need to
expose the functionality of the array and dictionary unparse methods.
Maybe we can have a DictionaryWriter and an ArrayWriter that deal with
the first/depth logic and have writeElement or writeEntry(key, value)
methods.
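
As a sketch of the first/depth bookkeeping such a writer might do,
here is a small standalone example. The class and method names follow
the notes above but are hypothetical, and it writes to std::ostream
rather than a qpdf Pipeline so that it stays self-contained.

```
#include <iostream>
#include <string>

// Hypothetical DictionaryWriter: emits "{", then comma-separated
// entries indented one level deeper than the enclosing value, then a
// closing "}" at the enclosing depth.
class DictionaryWriter
{
  public:
    DictionaryWriter(std::ostream& out, int depth) :
        out(out),
        depth(depth),
        first(true)
    {
        out << "{";
    }
    // The value is assumed to be already-serialized JSON.
    void writeEntry(std::string const& key, std::string const& json_value)
    {
        out << (first ? "" : ",") << "\n"
            << std::string(2 * (depth + 1), ' ') << "\"" << key
            << "\": " << json_value;
        first = false;
    }
    ~DictionaryWriter()
    {
        out << "\n" << std::string(2 * depth, ' ') << "}";
    }

  private:
    std::ostream& out;
    int depth;
    bool first;
};

int main()
{
    DictionaryWriter w(std::cout, 0);
    w.writeEntry("version", "2");
    w.writeEntry("pdfVersion", "\"1.7\"");
    return 0; // the destructor closes the dictionary
}
```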

For json output, do not unparse to string. Use the writers instead.
Write incrementally. This changes ordering only, but we should be able
to manually update the test output for those cases. Objects should be
written in numerical order, not lexically sorted. It probably makes
sense to put the trailer at the end since that's where it is in a
regular PDF.

When we get to full serialization, add a json serialization
performance test.

Some if not all of the json output functionality for v2 should move
into QPDF proper rather than living in QPDFJob. There can be a
top-level QPDF method that takes a pipeline and writes the JSON
serialization to it.

Decide what the API/CLI will be for serializing to v2. Will it just be
part of --json or will it be its own separate thing? Probably we
should make it so that a serialized PDF is different but uses the same
object format as regular json mode.

For going back from JSON to PDF, a separate utility will be needed.
It's not practical for QPDFObjectHandle to be able to read JSON
because of the special handling that is required for indirect objects,
and QPDF can't just accept JSON because the way InputSource is used is
completely different. Instead, we will need a separate utility that
has logic similar to what copyForeignObject does. It will go something
like this:

* Create an empty QPDF (not emptyPDF, one with no objects in it at
  all). This works:

  ```
  %PDF-1.3
  xref
  0 1
  0000000000 65535 f
  trailer << /Size 1 >>
  startxref
  9
  %%EOF
  ```

For each object:

* Walk through the object detecting any indirect objects. For each one
  that is not already known, reserve the object. We can also validate,
  but we should try to do the best we can with invalid JSON so people
  can get good error messages.
* Construct a QPDFObjectHandle from the JSON.
* If the object is the trailer, update the trailer.
* Else if the object doesn't exist, reserve it.
* If the object is reserved, call replaceReserved().
* Else the object already exists; this is an error.

This can almost be done through the public API. I think all we need is
the ability to create a reserved object with a specific object ID.
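
Here is a hedged sketch of that per-object step.
QPDFObjectHandle::newReserved and QPDF::replaceReserved are existing
public API; the JSON parsing itself and reserving an object with a
specific ID are exactly the missing pieces described above and are
represented only by comments.

```
#include <qpdf/QPDF.hh>
#include <qpdf/QPDFObjectHandle.hh>
#include <map>
#include <string>

// Rough sketch of installing one object parsed from an "obj:o g R"
// (or "obj:trailer") JSON entry. `reservations` maps "o g R" strings
// to reservations created when the reference was first seen.
void
install_object(
    QPDF& pdf,
    std::map<std::string, QPDFObjectHandle>& reservations,
    std::string const& ref,     // "o g R" or "trailer"
    QPDFObjectHandle parsed)    // built from the JSON value
{
    if (ref == "trailer") {
        // Updating the trailer would need its own API; this is one of
        // the gaps the notes mention.
        return;
    }
    auto iter = reservations.find(ref);
    if (iter != reservations.end()) {
        // Referenced before it was defined: fill in the reservation.
        pdf.replaceReserved(iter->second, parsed);
        reservations.erase(iter);
    } else {
        // Not seen before. Ideally we would reserve an object with
        // this specific object ID and then replace it; newReserved()
        // exists but always allocates the next available ID, which is
        // why a new API is needed.
        QPDFObjectHandle r = QPDFObjectHandle::newReserved(&pdf);
        pdf.replaceReserved(r, parsed);
    }
    // A real implementation would also detect the error case where the
    // object already exists and is neither reserved nor the trailer.
}
```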

The choices for json_key (job.yml) will be different for v1 and v2.
That information is already duplicated in multiple places.

----

Remember typo: search for "Typo" in QPDFJob::doJSONEncrypt.

Remember to test interaction between generators and schemas.

Should I have allowed array and object generators? Or maybe just
string generators for stream data?

When switching to generators for output, it's going to be very
important not to break the logic around having things that look at all
objects going first. Right now, there are good tests for it -- if you
either comment out pushInheritedAttributesToPage or do something that
postpones serializing the objects from allObjects (or even getting
them), you get test failures either way. However, if we were to
blindly overwrite test files, we might accidentally lose this. We will
have to try to get most of the logic working before trying to use
generators. Or maybe we shouldn't use generators at all for the
objects and only use them for the stream data. Or maybe we can use
generators but write it out early by exposing the depth() parameter.
That might actually be the safest way to do it. But that will be hard
with schemas. Another thing might be to not combine serializing with
other kinds of metadata.

Output JSON v2 will contain enough information to completely recreate
a PDF file. In other words, qpdf will have full, bidirectional,
lossless json serialization/deserialization of PDF.

If this is done, update the --json option in cli.rst to mention v2.
Also update QPDFJob::Config::json and of course other parts of the
docs (json.rst).

You can't create a PDF from v1 json because

* The PDF version header is not recorded

* Strings cannot be unambiguously encoded/decoded

* Can't tell a string from a name from an indirect object

* Strings are treated as PDF doc encoding and output as UTF-8, which
  doesn't work since multiple PDF doc code points are undefined

* There is no representation of stream data

* You can't tell a stream from a dictionary except by looking in both
  "objects" and "objectinfo". Fix this, and then remove "objectinfo".

Additionally, using "n n R" as a key in "objects" and "objectinfo"
messes up searching for things.

For json v2:

* Make sure it is possible to serialize and deserialize a PDF to JSON
  without loading the whole thing into memory.

  * As with a regular PDF, we can load everything into memory at once
    except stream data.

  * I think we can do this by having the concept of generated values,
    which we can make just be strings. We would have a JSON subclass
    whose value is a lambda that gets called to generate output. When
    we construct the JSON, the stream values would be lambda functions
    that generate the stream data.

  * When we parse the file, we'll have to have a way for the parser to
    know that it should create a lambda that reads the data from the
    file. I think this means we want something that parses JSON from
    an input source. It would have to keep track of the offset and
    length of a value from the input source and have a way (probably a
    lambda that it can call with a path) to indicate whether to store
    the value or to create a lambda that retrieves it. We would have
    to keep a std::shared_ptr<InputSource> around.

* Add json to the large file tests.

* Resolve differences between information shown in the json format vs.
  information shown with options like --check, --list-attachments,
  etc. The json format should be able to completely replace things
  that write to stdout. Be sure getAllPages() and other top-level
  convenience routines are there so people don't need to parse the
  pages tree themselves. For many workflows, it should be possible for
  someone to work with the json file based on json metadata rather
  than calling the QPDF API. (Of course, you still need the QPDF API
  for higher-level helper objects.)

* Consider using camelCase in multi-word key names to be consistent
  with job JSON and with how JSON is often represented in languages
  that use it more natively.

* Consider changing the contract to allow fields to be absent even
  when present in the schema. It's reasonable for people to check for
  the presence of a key. Most languages make this easy to do.

* If we allow --json to be mixed with --ignore-encryption, we must
  emphasize that the resulting json can't be turned back into a valid
  PDF.

Most things that are informational can stay the same. We will have to
go through every item to decide for sure, especially when camelCase is
taken into consideration.

New APIs:

QPDFObjectHandle::parseJSON(QPDF* context, JSON);
QPDFObjectHandle::parseJSON(QPDF* context, std::string const&);
operator ""_qpdf_json
C API to create a QPDFObjectHandle from a json string

JSON::parseFile
QPDF::parseJSON(JSON) (like parseFile, etc. -- deserializes json)
QPDF::updateFromJSON(JSON)

CLI: --infile-is-json -- indicate that the input is a qpdf json file
  rather than a PDF file
CLI: --update-from-json=file.json
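
A hedged sketch of how the proposed object-level APIs might read in
calling code. None of these functions exist yet; the names and
signatures are taken directly from the list above, while JSON::parse
is something the JSON class already provides.

```
#include <qpdf/JSON.hh>
#include <qpdf/QPDF.hh>
#include <qpdf/QPDFObjectHandle.hh>

// Speculative usage only: parseJSON and updateFromJSON are proposals,
// not current qpdf API.
void example()
{
    QPDF pdf;
    pdf.processFile("in.pdf");

    // Build a single object from a v2-style JSON value.
    QPDFObjectHandle oh = QPDFObjectHandle::parseJSON(
        &pdf, R"({"value": [1, "s:two", "3 0 R"]})");

    // Apply a partial update from qpdf JSON.
    pdf.updateFromJSON(JSON::parse(R"({"qpdf": {"objects": {}}})"));
}
```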
Have a "qpdf" key in the output that contains "jsonVersion",
|
|
"pdfVersion", and "objects". This replaces the "objects" field at the
|
|
top level. "objects" and "objectinfo" disappear from the top-level.
|
|
".version" and ".qpdf.jsonVersion" will match. The input to parseJSON
|
|
and updateFromJSON will have to have the "qpdf" key in it. All other
|
|
keys are ignored.
|
|
|
|
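
For illustration only, the resulting top-level shape might look
something like this (key names from the paragraph above; the exact
layout is still to be decided):

{
  "version": 2,
  "parameters": { ... },
  "qpdf": {
    "jsonVersion": 2,
    "pdfVersion": "1.7",
    "objects": {
      "obj:1 0 R": { "value": ... },
      "obj:trailer": { "value": ... }
    }
  }
}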

When creating from a JSON file, the JSON must be complete with data
for all streams, a trailer, and a pdfVersion. When updating from a
JSON:

* Any object whose value is null (not "value": null, but just null) is
  deleted.
* For any stream that appears without stream data, the stream data is
  left alone.
* Otherwise, the object from the JSON completely replaces the input
  object. No dictionary merges or anything like that are performed.
  It will call replaceObject.
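
For example, an update following those rules might look like this
(hypothetical object numbers): "2 0 R" is deleted, "3 0 R" gets a new
stream dictionary but keeps its existing stream data, and "4 0 R" is
replaced outright.

{
  "qpdf": {
    "jsonVersion": 2,
    "objects": {
      "obj:2 0 R": null,
      "obj:3 0 R": {
        "stream": {
          "dict": { "/Type": "/XObject" }
        }
      },
      "obj:4 0 R": { "value": [ 1, 2, 3 ] }
    }
  }
}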

Within .qpdf.objects, the key is "obj:o g R" or "obj:trailer", and the
value is a dictionary with exactly one of "value" or "stream" as its
single key.

The rationale for "obj:o g R" is that indirect object references are
just "o g R", and so code that wants to resolve one can do so easily
by just prepending "obj:" and not having to parse or split the string.

For non-streams:

{
  "obj:o g R": {
    "value": ...
  }
}

For streams:

{
  "obj:o g R": {
    "stream": {
      "dict": { ... stream dictionary ... },
      "filterable": bool,
      "raw": "base64-encoded raw data",
      "filtered": "base64-encoded filtered data"
    }
  }
}

Wherever a PDF object appears in the JSON output, including "value"
and "stream"."dict" above as well as other places where they might
appear, objects are represented as follows:

* Arrays, dictionaries, booleans, nulls, integers, and real numbers
  with no more than six decimal places are represented as their native
  JSON type.
* Real numbers with more than six decimal places are represented as
  "r:{real-value}".
* Names: "/Name" -- internal/canonical representation (e.g.
  "/Text/Plain", not #xx quoted)
* Indirect objects: "n n R"
* Strings: one of
  "s:json string treated as Unicode"
  "b:json string treated as bytes; character > \u00ff is an error"
  "e:base64-encoded bytes"

Test cases: these are the same:
* "b:\u00c8\u0080", "s:π", "s:\u03c0", and "e:z4A="
* "b:\u00d8\u003e\u00dd\u0054", "s:🥔", "s:\ud83e\udd54", and "e:8J+llA=="

When creating output from a string:
* If the string is explicitly unicode (UTF-8 or UTF-16), encode as
  "s:" without the leading U+FEFF
* Else if the string can be bidirectionally mapped between pdf-doc and
  unicode, transcode to unicode and encode as "s:"
* Else if the string would be decoded as binary, encode as "e:"
* Else encode as "b:"

When reading a string, any string that doesn't follow the above rules
is an error. This includes "r:" strings not parseable as a real
number, "/Name" strings containing a NUL character, "s:" or "b:"
strings that are not valid JSON strings, "b:" strings containing
character values > 0xff, or "e:" values that are not valid base64.
Once the string is read in, if the "s:" string can be bidirectionally
mapped between pdf-doc and unicode, store as PDFDoc. Otherwise store
as UTF-16BE. "b:" strings are stored as bytes, and "e:" strings are
decoded and stored as bytes.
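
A standalone sketch of the dispatch a reader might do on these string
forms. The classify helper and its simplified checks are assumptions;
real code would also apply the base64, PDFDoc, and validity rules
described above and would build QPDF objects rather than labels.

```
#include <iostream>
#include <regex>
#include <stdexcept>
#include <string>

// Classify a JSON string from a v2 objects section according to the
// prefixes described above.
std::string classify(std::string const& v)
{
    static std::regex const indirect_re("^\\d+ \\d+ R$");
    if (v.rfind("s:", 0) == 0) {
        return "unicode string";
    } else if (v.rfind("b:", 0) == 0) {
        return "byte string (chars > 0xff are an error)";
    } else if (v.rfind("e:", 0) == 0) {
        return "base64-encoded byte string";
    } else if (v.rfind("r:", 0) == 0) {
        return "real number";
    } else if ((! v.empty()) && (v[0] == '/')) {
        return "name";
    } else if (std::regex_match(v, indirect_re)) {
        return "indirect object reference";
    }
    // Per the rules above, anything else is an error.
    throw std::runtime_error("unrecognized string form: " + v);
}

int main()
{
    std::cout << classify("s:\u03c0") << "\n"
              << classify("e:z4A=") << "\n"
              << classify("/Text") << "\n"
              << classify("3 0 R") << "\n";
    return 0;
}
```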

Implementing this will require some refactoring of things between
QUtil and QPDF_String, plus we will need to implement a base64
encoder/decoder.

This enables a workflow like this:

* qpdf --json=latest infile.pdf > pdf.json
* modify pdf.json
* qpdf infile.pdf --update-from-json=pdf.json out.pdf

or

* qpdf --json=latest --json-stream-data=raw|filtered infile.pdf > pdf.json
* modify pdf.json
* qpdf pdf.json --infile-is-json out.pdf

Notes about streams and stream data:

* Always include "dict". "/Length" is removed from the stream
  dictionary.

* Add a new flag --json-stream-data={raw,filtered,none}. At most one
  of "raw" and "filtered" will appear for each stream. If "filtered"
  appears, "/Filter" and "/DecodeParms" are removed from the stream
  dictionary. This makes the stream data and dictionary match when the
  file is read back in.

* Always include "filterable" regardless of the value of
  --json-stream-data. The value of filterable is influenced by
  --decode-level, which is already in parameters.

* Add to parameters: the value of json-stream-data; the default is
  none.

* If --json-stream-data=none, omit stream data entirely.

* If --json-stream-data=raw, include raw stream data as base64. Show
  the data in "raw" even for unfiltered streams.

* If --json-stream-data=filtered, include the base64-encoded filtered
  stream data if we can and should decode it based on decode-level.
  Otherwise, include the base64-encoded raw data. See if we can honor
  --normalize-content. If a stream appears unfiltered in the input,
  still show it as filtered. Remove /DecodeParms and /Filter if
  filtering.

Note that --json-stream-data=filtered is different from
--filtered-stream-data in that --filtered-stream-data implies
--decode-level=all while --json-stream-data=filtered does not. Make
sure this is mentioned in the help for both options.

QPDFJob
=======

Here are some ideas for QPDFJob that didn't make it into 10.6. Not all
of these are necessarily good -- just things to consider.

* replace mode: --replace-object, --replace-stream-raw,
  --replace-stream-filtered
  * update the first paragraph of QPDF JSON in the manual to mention
    this
  * object numbers are not preserved by write, so object ID lookup
    has to be done separately for each invocation
  * you don't have to specify length for streams
  * you only have to specify filtering for streams if providing raw
    data

* Allow users to supply a custom progress reporter for QPDFJob

* Better interoperability with json output:

  * Make sure all the things that print stuff to stdout have json
    equivalents (check, showLinearizationData, etc.)
  * There should be a way to get json output other than having it
    print to stdout. It should be multi-language friendly and allow
    for large amounts of data, such as providing a callback that qpdf
    can write to (like a pipeline)
  * See also JSON v2

* How do we chain jobs? The idea would be that the input and/or output
  of a QPDFJob could be a QPDF object rather than a file. For input,
  it's pretty easy. For output, none of the output-specific options
  (encrypt, compress-streams, object-streams, etc.) would have any
  effect, so we would have to treat this like inspect for error
  checking. The QPDF object in the state where it's ready to be sent
  off to QPDFWriter would be used as the input to the next QPDFJob.
  For the job json, I think we can have the output be an identifier
  that can be used as the input for another QPDFJob. For a json file,
  we could have the top level detect whether it's an array with the
  convention that exactly one entry has an output, or we could have a
  subkey with other job definitions or something. Ideally, any input
  (copy-attachments-from, pages, etc.) could use a QPDF object. It
  wouldn't surprise me if this exposes bugs in qpdf around foreign
  streams as this has been a relatively fragile area before.

Documentation
=============

* Do a full pass through the documentation.

  * Make sure `qpdf` is consistent. Use QPDF when just referring to
    the package.
  * Make sure markup is consistent
  * Autogenerate where possible
  * Consider which parts might be good candidates for moving to the
    wiki.

* Commit 'Manual - enable line wrapping in table cells' from
  Mon Jan 17 12:22:35 2022 +0000 enables table cell wrapping. See if
  this can be incorporated directly into sphinx_rtd_theme so that the
  workaround can be removed.

* When possible, update the debian package to include docs again. See
  https://bugs.debian.org/1004159 for details.

Document-level work
===================

* Ideas here may be superseded by #593.

* QPDFPageCopier -- an object for moving pages around within files or
  between files and performing various transformations. Reread/rewrite
  _page-selection in the manual if needed.

  * Handle all the stuff of pages and split-pages
  * Do n-up, booklet, collation
  * Look through cli and see what else...flatten-*?
  * See comments in QPDFPageDocumentHelper.hh for addPage -- search
    for "a future version".
  * Make it efficient for bulk operations
  * Make certain doc-level features selectable
  * qpdf.cc should do all its page operations, including
    overlay/underlay, splitting, and merging, using this
  * There should also be example code

* After doc-level checks are in, call --check on the output files in
  the "Copy Annotations" tests.

* Document-level checks. For example, for forms, make sure all form
  fields point to an annotation on exactly one page and that all
  widget annotations are associated with a form field. Hook this into
  QPDFPageCopier as well as the doc helpers. Make sure it is called
  from --check.

* See also issues tagged with "pages"

* Add flags to the CLI to select which document-level options to
  preserve or not preserve. We will probably need a pair of mutually
  exclusive, repeatable options with a way to specify all, none, only
  {x,y}, or all but {x,y}.

* If a page contains a reference to a file attachment annotation, then
  when that page is copied, if the file attachment appears in the
  top-level EmbeddedFiles tree, that entry should be preserved in the
  destination file. Otherwise, we will probably require the use of
  --copy-attachments-from to preserve these. What will the strategy be
  for deduplicating in the automatic case?

Text Appearance Streams
=======================

This is a list of known issues with text appearance streams and things
we might do about them.

* For variable text, the spec says to pull any resources from /DR that
  are referenced in /DA, but if the resource dictionary already has
  that resource, just use the one that's there. The current code looks
  only for /Tf and adds it if needed. We might want to instead merge
  /DR with resources and then remove anything that's unreferenced. We
  have all the code required for that in ResourceFinder, except that
  TfFinder also gets the font size, which ResourceFinder doesn't do.

* There are things we are missing because we don't look at font
  metrics. The code from TextBuilder (work) has almost everything in
  it that is required. Once we have knowledge of character widths, we
  can support quadding and multiline text fields (/Ff 4096), and we
  can potentially squeeze text to fit into a field. For multiline,
  first squeeze vertically down to the font height, then squeeze
  horizontally with Tz. For single line, squeeze horizontally with Tz.
  If we use Tz, issue a warning.

* When mapping characters to widths, we will need to care about
  character encoding. For built-in fonts, we can create a map from
  Unicode code point to width and then go from the font's encoding to
  unicode to the width. See misc/character-encoding/ (not on github)
  and font metric information for the 14 standard fonts in my local
  pdf-spec directory.

* Once we know about character widths, we can correctly support
  auto-sized variable text fields (0 Tf). If this is fixed, search for
  "auto-sized" in cli.rst.

Fuzz Errors
===========

* https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=<N>

* Ignoring these:
  * Out of memory in dct: 35001, 32516

External Libraries
==================

Current state (10.0.2):

* The qpdf/external-libs repository builds external-libs on a
  schedule. It detects and downloads the latest versions of zlib,
  jpeg, and openssl and creates source and binary distribution zip
  files in an artifact called "distribution".

* Releases in qpdf/external-libs are made manually. They contain
  qpdf-external-libs-{bin,src}.zip.

* The qpdf build finds the latest non-prerelease release and downloads
  the qpdf-external-libs-*.zip files from the releases in the setup
  stage.

* To upgrade to a new version of external-libs, create a new release
  of qpdf/external-libs (see README-maintainer in external-libs) from
  the distribution artifact of the most recent successful build after
  ensuring that it works.

Desired state:

* The qpdf/external-libs repository should create release candidates.
  Ideally, every scheduled run would make its zip files available. A
  personal access token with actions:read scope for the
  qpdf/external-libs repository is required to download the artifact
  from an action run, and qpdf/qpdf's secrets.GITHUB_TOKEN doesn't
  have this access. We could create a service account for this
  purpose. As an alternative, we could have a draft release in
  qpdf/external-libs that the qpdf/external-libs build could update
  with each candidate. It may also be possible to solve this by
  developing a simple GitHub app.

* Scheduled runs of the qpdf build in the qpdf/qpdf repository (not a
  fork or pull request) could download external-libs from the release
  candidate area instead of the latest stable release. Pushes to the
  build branch should still use the latest release so it always
  matches the main branch.

* Periodically, we would create a release of external-libs from the
  release candidate zip files. This could be done safely because we
  know the latest qpdf works with it. This could be done at least
  before every release of qpdf, but potentially it could be done at
  other times, such as when a new dependency version is available or
  after some period of time.

Other notes:

* The external-libs branch in qpdf/qpdf was never documented. We might
  be able to get away with deleting it.

* See README-maintainer in qpdf/external-libs for information on
  creating a release. This could be at least partially scripted in a
  way that works for the qpdf/qpdf repository as well since they are
  very similar.

ABI Changes
===========

This is a list of changes to make next time there is an ABI change.
Comments appear in the code prefixed by "ABI". Always search for ABI
in source and header files to find items not listed here. Also search
for "[[deprecated" to find deprecated APIs that can be removed.

Page splitting/merging
======================

* Update page splitting and merging to handle document-level
  constructs with page impact such as interactive forms and article
  threading. Check keys in the document catalog for others, such as
  outlines, page labels, thumbnails, and zones. For threads,
  Subramanyam provided a test file; see ../misc/article-threads.pdf.
  Email Q-Count: 431864 from 2009-11-03.

* bookmarks (outlines) 12.3.3
  * support bookmarks when merging
  * prune bookmarks that don't point to a surviving page when merging
    or splitting
  * make sure conflicting named destinations work; possibly test by
    including the same file by two paths in a merge
  * see also comments in issue 343

  Note: the original implementation of bookmark preservation for split
  pages caused a very high performance hit. The problem was introduced
  in 313ba081265f69ac9a0324f9fe87087c72918191 and reverted in the
  commit that adds this paragraph. The revert includes marking a few
  test cases as $td->EXPECT_FAILURE. When properly coded, the test
  cases will need to be adjusted to only include the parts of the
  outlines that are actually copied. The tests in question are "split
  page with outlines". When implementing this properly, ensure that
  the performance is not adversely affected by timing split-pages on a
  large file with complex outlines such as the PDF specification.

  When pruning outlines, keep all outlines in the hierarchy that are
  above an outline for a page we care about. If one of the ancestor
  outlines points to a non-existent page, clear its dest. If an
  outline does not have any children that point to pages in the
  document, just omit it.

  Possible strategy:
  * resolve all named destinations to explicit destinations
  * concatenate top-level outlines
  * prune outlines whose dests don't point to a valid page
  * recompute all /Count fields

  Test files
  * page-labels-and-outlines.pdf: old file with both page labels and
    outlines. All destinations are explicit destinations. Each page
    has Potato and a number. All titles are feline names.
  * outlines-with-actions.pdf: mixture of explicit destinations,
    named destinations, goto actions with explicit destinations, and
    goto actions with named destinations; uses /Dests key in names
    dictionary. Each page has Salad and a number. All titles are
    silly words. One destination is an indirect object.
  * outlines-with-old-root-dests.pdf: like outlines-with-actions
    except it uses the PDF-1.1 /Dests dictionary for named
    destinations, and each page has Soup and a number. Also pages are
    numbered with upper-case Roman numerals starting with 0. All
    titles are silly words preceded by a bullet.

  If outline handling is significantly improved, see
  ../misc/bad-outlines/bad-outlines.pdf and email:
  https://mail.google.com/mail/u/0/#search/rfc822msgid%3A02aa01d3d013%249f766990%24de633cb0%24%40mono.hr)

* Form fields: should be similar to outlines.

Analytics
=========

Consider features that make it easier to detect certain patterns in
PDF files. The information below could be computed using an external
program that reads the existing json, but if it's useful enough, we
could add it directly to the json output.

* Add to "pages" in the json:
  * "inheritsresources": bool; whether there are any inherited
    attributes from ancestor page tree nodes
  * "sharedresources": a list of indirect objects that are
    "/Resources" dictionaries or "XObject" resource dictionary subkeys
    of either the page itself or of any form XObject referenced by the
    page.

* Add to "objectinfo" in json: "directpagerefcount": the number of
  pages that directly reference this object (i.e., you can find an
  indirect reference to the object in the page dictionary without
  traversing over any indirect objects)
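
For illustration, a page entry with those proposed keys might look
like this (key names from the list above; values are made up):

{
  "object": "7 0 R",
  "inheritsresources": false,
  "sharedresources": [ "12 0 R" ]
}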

General
=======

NOTE: Some items in this list refer to files in my personal home
directory or that are otherwise not publicly accessible. This includes
things sent to me by email that are specifically not public. Even so,
I find it useful to make reference to them in this list.

* Look at https://bestpractices.coreinfrastructure.org/en

* Get rid of remaining assert() calls from non-test code.

* Large file tests fail with linux32 before and after cmake. This was
  first noticed after 10.6.3. I don't think it's worth fixing.

* Consider updating the fuzzer with code that exercises
  copyAnnotations, file attachments, and name and number trees. Check
  fuzzer coverage.

* Add code for creation of a file attachment annotation. It should
  also be possible to create a widget annotation and a form field.
  Update the pdf-attach-file.cc example with new APIs when ready.

* Flattening of form XObjects seems like something that would be
  useful in the library. We are seeing more cases of completely valid
  PDF files with form XObjects that cause problems in other software.
  Flattening of form XObjects could be a useful way to work around
  those issues or to prepare files for additional processing, making
  it possible for users of the qpdf library to not be concerned about
  form XObjects. This could be done recursively; i.e., we could have a
  method to embed a form XObject into whatever contains it, whether
  that is a form XObject or a page. This would require more
  significant interpretation of the content stream. We would need a
  test file in which the placement of the form XObject has to be in
  the right place, e.g., the form XObject partially obscures earlier
  code and is partially obscured by later code. Keys in the resource
  dictionary may need to be changed -- create test cases with lots of
  duplicated/overlapping keys.

* Part of closed_file_input_source.cc is disabled on Windows because
  of odd failures. It might be worth investigating so we can fully
  exercise this in the test suite. That said, ClosedFileInputSource
  is exercised elsewhere in qpdf's test suite, so this is not that
  pressing.

* If possible, consider adding CCITT3, CCITT4, or any other easy
  filters. For some reference code that we probably can't use but may
  be handy anyway, see
  http://partners.adobe.com/public/developer/ps/sdk/index_archive.html

* If possible, support the following types of broken files:

  - Files that have no whitespace token after "endobj" such that
    endobj collides with the start of the next object

  - See ../misc/broken-files

  - See ../misc/bad-files-issue-476. This directory contains a
    snapshot of the google doc and linked PDF files from issue #476.
    Please see the issue for details.

* Additional form features
  * set value from CLI? Specify title, and provide a way to
    disambiguate, probably by giving the objgen of the field

* Pl_TIFFPredictor is pretty slow.

* Support for handling file names with Unicode characters in Windows
  is incomplete. qpdf seems to support them okay from a functionality
  standpoint, and the right thing happens if you pass in UTF-8
  encoded filenames to QPDF library routines in Windows (they are
  converted internally to wchar_t*), but file names are encoded in
  UTF-8 on output, which doesn't produce nice error messages or
  output on Windows in some cases.

* If we ever wanted to do anything more with character encoding, see
  ../misc/character-encoding/, which includes a machine-readable dump
  of table D.2 in the ISO-32000 PDF spec. This shows the mapping
  between Unicode, StandardEncoding, WinAnsiEncoding,
  MacRomanEncoding, and PDFDocEncoding.

* Some test cases on bad files fail because qpdf is unable to find
  the root dictionary when it fails to read the trailer. Recovery
  could find the root dictionary and even the info dictionary in
  other ways. In particular, issue-202.pdf can be opened by evince,
  and there's no real reason that qpdf couldn't be made to recover
  that file as well.

* Audit every place where qpdf allocates memory to see whether there
  are cases where malicious inputs could cause qpdf to attempt to
  grab very large amounts of memory. Certainly there are cases like
  this, such as if a very highly compressed, very large image stream
  is requested in a buffer. Hopefully normal input to output
  filtering doesn't ever try to do this. QPDFWriter should be checked
  carefully too. See also bugs/private/from-email-663916/

* Interactive form modification:
  https://github.com/qpdf/qpdf/issues/213 contains a good discussion
  of some ideas for adding methods to modify annotations and form
  fields if we want to make it easier to support modifications to
  interactive forms. Some of the ideas have been implemented, and
  some of them probably never will be implemented, but it's worth a
  read if there is an intention to work on this. In the issue, search
  for "Regarding write functionality", and read that comment and the
  responses to it.

* Look at ~/Q/pdf-collection/forms-from-appian/

* When decrypting files with /R=6, hash_V5 is called more than once
  with the same inputs. Caching the results or refactoring to reduce
  the number of identical calls could improve performance for
  workloads that involve processing large numbers of small files.

* Consider adding a method to balance the pages tree. It would call
  pushInheritedAttributesToPage, construct a pages tree from scratch,
  and replace the /Pages key of the root dictionary with the new
  tree.
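
A minimal sketch of what such a method could look like using existing
public API (pushInheritedAttributesToPage, getAllPages,
makeIndirectObject, replaceKey). The fanout value and the build_tree
helper are assumptions; the sketch assumes at least one page and
ignores refreshing QPDF's internal page cache afterward.

```
#include <qpdf/QPDF.hh>
#include <qpdf/QPDFObjectHandle.hh>
#include <cstddef>
#include <vector>

// Sketch: group nodes into /Pages parents with at most `fanout` kids
// per node, repeating until a single root node remains.
static QPDFObjectHandle
build_tree(QPDF& q, std::vector<QPDFObjectHandle> nodes, size_t fanout)
{
    do {
        std::vector<QPDFObjectHandle> parents;
        for (size_t i = 0; i < nodes.size(); i += fanout) {
            auto node =
                q.makeIndirectObject(QPDFObjectHandle::newDictionary());
            auto kids = QPDFObjectHandle::newArray();
            long long count = 0;
            for (size_t j = i; j < nodes.size() && j < i + fanout; ++j) {
                auto kid = nodes.at(j);
                kids.appendItem(kid);
                kid.replaceKey("/Parent", node);
                // A leaf page counts as 1; an interior node contributes
                // the number of pages below it.
                count += kid.getKey("/Type").isNameAndEquals("/Pages")
                    ? kid.getKey("/Count").getIntValue()
                    : 1;
            }
            node.replaceKey("/Type", QPDFObjectHandle::newName("/Pages"));
            node.replaceKey("/Kids", kids);
            node.replaceKey("/Count", QPDFObjectHandle::newInteger(count));
            parents.push_back(node);
        }
        nodes = parents;
    } while (nodes.size() > 1);
    return nodes.at(0);
}

void
balance_pages_tree(QPDF& q)
{
    q.pushInheritedAttributesToPage();
    std::vector<QPDFObjectHandle> pages = q.getAllPages();
    q.getRoot().replaceKey("/Pages", build_tree(q, pages, 32));
}
```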

* Study what's required to support savable forms that can be saved by
  Adobe Reader. Does this require actually signing the document with
  an Adobe private key? Search for "Digital signatures" in the PDF
  spec, and look at ~/Q/pdf-collection/form-with-full-save.pdf, which
  came from Adobe's example site. See also
  ../misc/digital-sign-from-trueroad/. If digital signatures are
  implemented, update the docs on crypto providers, which mention
  that this may happen in the future.

* Qpdf does not honor /EFF when adding new file attachments. When it
  encrypts, it never generates streams with explicit crypt filters.
  Prior to 10.2, there was an incorrect attempt to treat /EFF as a
  default value for decrypting file attachment streams, but it is not
  supposed to mean that. Instead, it is intended for conforming
  writers to obey this when adding new attachments. Qpdf is not a
  conforming writer in that respect.

* The whole xref handling code in the QPDF object allows the same
  object with more than one generation to coexist, but a lot of logic
  assumes this isn't the case. Anything that creates mappings only
  with the object number and not the generation is this way,
  including most of the interaction between QPDFWriter and QPDF. If
  we wanted to allow the same object with more than one generation to
  coexist, which I'm not sure is allowed, we could fix this by
  changing xref_table. Alternatively, we could detect and disallow
  that case. In fact, it appears that Adobe Reader and other PDF
  viewing software silently ignores objects of this type, so this is
  probably not a big deal.

* From a suggestion in bug 3152169, consider having an option to
  re-encode inline images with an ASCII encoding.

* From github issue 2, provide more in-depth output for examining
  hint stream contents. Consider adding an option to provide a
  human-readable dump of linearization hint tables. This should
  include improving the 'overflow reading bit stream' message as
  reported in issue #2. There are multiple calls to stopOnError in
  the linearization checking code. Ideally, these should not
  terminate checking. It would require re-acquiring an understanding
  of all that code to make the checks more robust. In particular,
  it's hard to look at the code and quickly determine what is a true
  logic error and what could happen because of malformed user input.
  See also ../misc/linearization-errors.

* If I ever decide to make appearance stream generation aware of
  fonts or font metrics, see email from Tobias with Message-ID
  <5C3C9C6C.8000102@thax.hardliners.org> dated 2019-01-14.

* Look at places in the code where object traversal is being done and,
  where possible, try to avoid it entirely or at least avoid ever
  traversing the same objects multiple times.

----------------------------------------------------------------------

HISTORICAL NOTES

Performance
===========

As described in https://github.com/qpdf/qpdf/issues/401, there was
great performance degradation between qpdf 7.1.1 and 9.1.1. Doing a
bisect between dac65a21fb4fa5f871e31c314280b75adde89a6c and
release-qpdf-7.1.1, I found several commits that damaged performance.
I fixed some of them to improve performance by about 70% (as measured
by saying that old times were 170% of new times). The remaining
commits that broke performance either can't be corrected because they
would re-introduce an old bug or aren't worth correcting because of
the high value they offer relative to a relatively low penalty. For
historical reference, here are the commits. The numbers are the time
in seconds on the machine I happened to be using of splitting the
first 100 pages of PDF32000_2008.pdf 20 times and taking the average
duration.

Commits that broke performance:

* d0e99f195a987c483bbb6c5449cf39bee34e08a1 -- object description and
  context: 0.39 -> 0.45
* a01359189b32c60c2d55b039f7aefd6c3ce0ebde (minus 313ba08) -- fix
  dangling references: 0.55 -> 0.6
* e5f504b6c5dc34337cc0b316b4a7b1fca7e614b1 -- sparse array: 0.6 -> 0.62

Other intermediate steps that were previously fixed:

* 313ba081265f69ac9a0324f9fe87087c72918191 -- copy outlines into
  split: 0.55 -> 4.0
* a01359189b32c60c2d55b039f7aefd6c3ce0ebde -- fix dangling references:
  4.0 -> 9.0

This commit fixed the awful problem introduced in 313ba081:

* a5a016cdd26a8e5c99e5f019bc30d1bdf6c050a2 -- revert outline
  preservation: 9.0 -> 0.6

Note that the fix dangling references commit had a much worse impact
prior to removing the outline preservation, so I also measured its
impact in isolation.

A few important lessons (in README-maintainer):

* Indirection through PointerHolder<Members> is expensive and should
  not be used for things that are created and destroyed frequently,
  such as QPDFObjectHandle and QPDFObject.
* Traversal of objects is expensive and should be avoided where
  possible.

Also, it turns out that PointerHolder is more performant than
std::shared_ptr.

Rejected Ideas
==============

* Investigate whether there is a way to automate the memory checker
  tests for Windows.

* Provide support in QPDFWriter for writing incremental updates.
  Provide support in qpdf for preserving incremental updates. The
  goal should be that QDF mode is fully functional for files with
  incremental updates, including fix_qdf.

  Note that there's nothing that says an indirect object in one
  update can't refer to an object that doesn't appear until a later
  update. This means that QPDF has to treat indirect null objects
  differently from how it does now. QPDF drops indirect null objects
  that appear as members of arrays or dictionaries. For arrays, it's
  handled in QPDFWriter where we make indirect nulls direct. This is
  in a single if block, and nothing else in the code cares about it.
  We could just remove that if block and not break anything except a
  few test cases that exercise the current behavior. For
  dictionaries, it's more complicated. In this case,
  QPDF_Dictionary::getKeys() ignores all keys with null values, and
  hasKey() returns false for keys that have null values. We would
  probably want to make QPDF_Dictionary able to handle the special
  case of keys that are indirect nulls and basically never have it
  drop any keys that are indirect objects.

  If we make a change to have qpdf preserve indirect references to
  null objects, we have to note this in ChangeLog and in the release
  notes since this will change output files. We did this before when
  we stopped flattening scalar references, so this is probably not a
  big deal. We also have to make sure that the testing for this
  handles non-trivial cases of the targets of indirect nulls being
  replaced by real objects in an update. I'm not sure how this plays
  with linearization, if at all. For cases where incremental updates
  are not being preserved as incremental updates and where the data
  is being folded in (as is always the case with qpdf now), none of
  this should make any difference in the actual semantics of the
  files.

* The second xref stream for linearized files has to be padded only
  because we need file_size as computed in pass 1 to be accurate. If
  we were not allowing writing to a pipe, we could seek back to the
  beginning and fill in the value of /L in the linearization
  dictionary as an optimization to alleviate the need for this
  padding. Doing so would require us to pad the /L value
  individually and also to save the file descriptor and determine
  whether it's seekable. This is probably not worth bothering with.

* Based on an idea suggested by user "Atom Smasher", consider
  providing some mechanism to recover earlier versions of a file
  embedded prior to appended sections.

* Consider creating a sanitizer to make it easier for people to send
  broken files. Now that we have json mode, this is probably no
  longer worth doing. Here is the previous idea, possibly implemented
  by making it possible to run the lexer (tokenizer) over a whole
  file. Make it possible to replace all strings in a file lexically
  even on badly broken files. Ideally this should work on files that
  are lacking xref, have broken links, duplicated dictionary keys,
  syntax errors, etc., and ideally it should work with encrypted
  files if possible. This should go through the streams and strings
  and replace them with fixed or random characters, preferably, but
  not necessarily, in a manner that works with fonts. One possibility
  would be to detect whether a string contains characters with normal
  encoding, and if so, use 0x41. If the string uses character maps,
  use 0x01. The output should otherwise be unrelated to the input.
  This could be built after the filtering and tokenizer rewrite and
  should be done in a manner that takes advantage of the other
  lexical features. This sanitizer should also clear metadata and
  replace images. If I ever do this, the file from issue #494 would
  be a great one to look at.

* Here are some notes about having stream data providers modify
  stream dictionaries. I had wanted to add this functionality to make
  it more efficient to create stream data providers that may
  dynamically decide what kind of filters to use and that may end up
  modifying the dictionary conditionally depending on the original
  stream data. Ultimately I decided not to implement this feature.
  This paragraph describes why.

  * When writing, the way objects are placed into the queue for
    writing strongly precludes creation of any new indirect objects,
    or even changing which indirect objects are referenced from which
    other objects, because we sometimes write as we are traversing
    and enqueuing objects. For non-linearized files, there is a risk
    that an indirect object that used to be referenced would no
    longer be referenced, and whether it was already written to the
    output file would be based on an accident of where it was
    encountered when traversing the object structure. For linearized
    files, the situation is considerably worse. We decide which
    section of the file to write an object to based on a mapping of
    which objects are used by which other objects. Changing this
    mapping could cause an object to appear in the wrong section, to
    be written even though it is unreferenced, or to be entirely
    omitted since, during linearization, we don't enqueue new objects
    as we traverse for writing.

  * There are several places in QPDFWriter that query a stream's
    dictionary in order to prepare for writing or to make decisions
    about certain aspects of the writing process. If the stream data
    provider has the chance to modify the dictionary, every piece of
    code that gets stream data would have to be aware of this. This
    would potentially include end user code. For example, any code
    that called getDict() on a stream before installing a stream data
    provider and expected that dictionary to be valid would
    potentially be broken. As implemented right now, you must perform
    any modifications on the dictionary in advance and provide
    /Filter and /DecodeParms at the time you install the stream data
    provider. This means that some computations would have to be
    done more than once, but for linearized files, stream data
    providers are already called more than once. If the work done by
    a stream data provider is especially expensive, it can implement
    its own cache.

  The example examples/pdf-custom-filter.cc demonstrates the use of
  custom stream filters. This includes a custom pipeline and a custom
  stream filter, as well as modification of a stream's dictionary to
  include creation of a new stream that is referenced from
  /DecodeParms.