Next
====
In order:
* PointerHolder -> shared_ptr
* code formatting
* cmake
* ABI including --json default is latest
* json v2
Other (do in any order):
Misc
* Get rid of "ugly switch statements" in QUtil.cc -- replace with
static map initializers; see the sketch at the end of this list.
(Search for "ugly switch statements" below as well.)
* Consider exposing get_next_utf8_codepoint in QUtil
* Add QUtil::is_explicit_utf8 that does what QPDF_String::getUTF8Val
does to detect UTF-8 encoded strings per PDF 2.0 spec.
* Add an option --ignore-encryption to ignore encryption information
and treat encrypted files as if they weren't encrypted. This should
make it possible to solve #598 (--show-encryption without a
password). We'll need to make sure we don't try to filter any
streams in this mode. Ideally we should be able to combine this with
--json so we can look at the raw encrypted strings and streams if we
want to. Since providing the password may reveal additional details,
--show-encryption could potentially retry with this option if the
first time doesn't work. Then, with the file open, we can read the
encryption dictionary normally.
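
As a rough illustration of the static map initializer idea (the enum and
table here are made up for the example, not the actual QUtil internals):

  #include <map>
  #include <string>

  // Hypothetical example: replace a switch over an enum with a table
  // built once by a static initializer and consulted by lookup.
  enum class encoding { utf8, utf16be, pdfdoc };

  static std::string const&
  encoding_name(encoding e)
  {
      static std::map<encoding, std::string> const names = {
          {encoding::utf8, "UTF-8"},
          {encoding::utf16be, "UTF-16BE"},
          {encoding::pdfdoc, "PDFDocEncoding"},
      };
      static std::string const unknown = "unknown";
      auto it = names.find(e);
      return (it == names.end()) ? unknown : it->second;
  }
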
Soon: Break ground on "Document-level work"
Code Formatting
===============
It would be good to have automatic code formatting to make the code
more consistent and to make it easier for contributors. We would do a
big commit to bring everything up to spec. Things to keep in mind:
* clang-format looks promising but is a bit of a moving target; need
to see if its output has been stable over the past few releases
since the first one that can produce code the way I like it. I may
have to require a minimum clang-format version.
* Ideas here aim for something similar to rust's defaults but with
adjustments to meet my existing style and preferences:
* 80 columns
* 4-space indent (no tabs)
* Compact braces. While this is a big departure from the past and
will create many changes, I have become accustomed to this style
and use it across my projects in other languages these days.
* No "bin packing" -- if arguments (constructor initializers,
function arguments, etc.) don't fit on one line, do one argument
per line
* With the exception of short lambdas, no block constructs can be
collapsed to a single line.
* Braces are mandatory for all control constructs (no if, while,
etc. without braces)
* Space after control constructs
* Try to get emacs c-style to match as closely as possible
* Consider blame.ignoreRevsFile if it seems to help
* See also https://bestpractices.coreinfrastructure.org/en
* QTC::TC first two arguments have to be lexically on one line. If the
code formatter breaks this, some QTC calls may have to be surrounded by
// clang-format off
// clang-format on
or qtest may have to be made more flexible unless the formatter has
some rules about some places where lines shouldn't be broken.
* auto_* files from generate_auto_job should be exempt from
formatting.
Ideally it should be possible to run formatting in CI so that pull
requests have to be properly formatted, but if not, there needs to be
a `make format` similar to `make spell` that I could apply after
merging contributions and from time to time.
A .clang-format file can be created at the top of the repository.
Output JSON v2
==============
Output JSON v2 will contain enough information to completely recreate
a PDF file. In other words, qpdf will have full, bidirectional,
lossless json serialization/deserialization of PDF.
If this is done, update --json option in cli.rst to mention v2. Also
update QPDFJob::Config::json and of course other parts of the docs
(json.rst).
You can't create a PDF from v1 json because
* The PDF version header is not recorded
* Strings cannot be unambiguously encoded/decoded
* Can't tell string from name from indirect object
* Strings are treated as PDF doc encoding and output as UTF-8, which
doesn't work since multiple PDF doc code points are undefined
* There is no representation of stream data
* You can't tell a stream from a dictionary except by looking in both
"object" and "objectinfo". Fix this, and then remove "objectinfo".
Additionally, using "n n R" as a key in "objects" and "objectinfo"
messes up searching for things.
For json v2:
* Make sure it is possible to serialize and deserialize a PDF to and
from JSON without loading the whole thing into memory.
* As with a regular PDF, we can load everything into memory at once
except stream data.
* I think we can do this by having the concept of generated values,
which we can make just be strings. We would have a JSON subclass
whose value is a lambda that gets called to generate output. When
we construct the JSON the stream values would be lambda functions
that generate the stream data (see the sketch after this list).
* When we parse the file, we'll have to have a way for the parser to
know that it should create a lambda that reads the data from the
file. I think this means we want something that parses JSON from
an input source. It would have to keep track of the offset and
length of a value from the input source and have a way (probably a
lambda that it can call with a path) to indicate whether to store
the value or to create a lambda that retrieves it. We would have to
keep a std::shared_ptr<InputSource> around.
* Add json to the large file tests.
* Resolve differences between information shown in the json format vs.
information shown with options like --check, --list-attachments,
etc. The json format should be able to completely replace things
that write to stdout. Be sure getAllPages() and other top-level
convenience routines are there so people don't need to parse the
pages tree themselves. For many workflows, it should be possible for
someone to work in the json file based on json metadata rather than
calling the QPDF API. (Of course, you still need the QPDF API for
higher level helper objects.)
* Consider using camelCase in multi-word key names to be consistent
with job JSON and with how JSON is often represented in languages
that use it more natively.
* Consider changing the contract to allow fields to be absent even
when present in the schema. It's reasonable for people to check for
presence of a key. Most languages make this easy to do.
* If we allow --json to be mixed with --ignore-encryption, we must
emphasize that the resulting json can't be turned back into a valid
PDF.
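
Minimal sketch of the "generated values" idea mentioned in the list
above (the class name and shape are invented; this is not the current
JSON API): a value either stores its serialized text or holds a lambda
that produces it on demand, which is what would let stream data stay
out of memory until write time.

  #include <functional>
  #include <iostream>
  #include <string>

  // A JSON scalar whose serialized form can be produced by a callback.
  // For a stream, the callback would read and base64-encode the stream
  // data directly into the output at write time.
  class LazyValue
  {
    public:
      explicit LazyValue(std::string v) :
          writer([v](std::ostream& o) { o << v; })
      {
      }
      explicit LazyValue(std::function<void(std::ostream&)> w) :
          writer(w)
      {
      }
      void write(std::ostream& o) const { writer(o); }

    private:
      std::function<void(std::ostream&)> writer;
  };

  int main()
  {
      LazyValue immediate("\"a fixed string\"");
      LazyValue generated([](std::ostream& o) {
          o << "\"...base64 stream data produced on demand...\"";
      });
      immediate.write(std::cout);
      std::cout << "\n";
      generated.write(std::cout);
      std::cout << "\n";
      return 0;
  }
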
Most things that are informational can stay the same. We will have to
go through every item to decide for sure, especially when camelCase is
taken into consideration.
New APIs:
QPDFObjectHandle::parseJSON(QPDF* context, JSON);
QPDFObjectHandle::parseJSON(QPDF* context, std::string const&);
operator ""_qpdf_json
C API to create a QPDFObjectHandle from a json string
JSON::parseFile
QPDF::parseJSON(JSON) (like parseFile, etc. -- deserializes json)
QPDF::updateFromJSON(JSON)
CLI: --infile-is-json -- indicate that the input is a qpdf json file
rather than a PDF file
CLI: --update-from-json=file.json
Have a "qpdf" key in the output that contains "jsonVersion",
"pdfVersion", and "objects". This replaces the "objects" field at the
top level. "objects" and "objectinfo" disappear from the top-level.
".version" and ".qpdf.jsonVersion" will match. The input to parseJSON
and updateFromJSON will have to have the "qpdf" key in it. All other
keys are ignored.
When creating from a JSON file, the JSON must be complete with data
for all streams, a trailer, and a pdfVersion. When updating from a
JSON:
* Any object whose value is null (not "value": null, but just null) is
deleted.
* For any stream that appears without stream data, the stream data is
left alone.
* Otherwise, the object from the JSON completely replaces the input
object. No dictionary merges or anything like that are performed.
It will call replaceObject.
Within .qpdf.objects, the key is "obj:o,g" or "obj:trailer", and the
value is a dictionary with exactly one of "value" or "stream" as its
single key.
For non-streams:
  {
    "obj:o,g": {
      "value": ...
    }
  }
For streams:
  {
    "obj:o,g": {
      "stream": {
        "dict": { ... stream dictionary ... },
        "filterable": bool,
        "raw": "base64-encoded raw data",
        "filtered": "base64-encoded filtered data"
      }
    }
  }
Wherever a PDF object appears in the JSON output, including "value"
and "stream"."dict" above as well as other places where they might
appear, objects are represented as follows:
* Arrays, dictionaries, booleans, nulls, integers, and real numbers
with no more than six decimal places are represented as their native
JSON type.
* Real numbers with more than six decimal places are represented as
"r:{real-value}".
* Names: "/Name" -- internal/canonical representation (e.g.
"/Text/Plain", not #xx quoted)
* Indirect objects: "n n R"
* Strings: one of
"s:json string treated as Unicode"
"b:json string treated as bytes; character > \u00ff is an error"
"e:base64-encoded bytes"
Test cases: these are the same:
* "b:\u00c8\u0080", "s:π", "s:\u03c0", and "e:z4A="
* "b:\u00d8\u003e\u00dd\u0054", "s:🥔", "s:\ud83e\udd54", and "e:8J+llA=="
When creating output from a string:
* If the string is explicitly unicode (UTF-8 or UTF-16), encode as
"s:" without the leading U+FEFF
* Else if the string can be bidirectionally mapped between pdf-doc and
unicode, transcode to unicode and encode as "s:"
* Else if the string would be decoded as binary, encode as "e:"
* Else encode as "b:"
When reading a string, any string that doesn't follow the above rules
is an error. This includes "r:" strings not parseable as a real
number, "/Name" strings containing a NUL character, "s:" or "b:"
strings that are not valid JSON strings, "b:" strings containing
character values > 0xff, or "e:" values that are not valid base64.
Once the string is read in, if the "s:" string can be bidirectionally
mapped between pdf-doc and unicode, store as PDFDoc. Otherwise store
as UTF-16BE. "b:" strings are stored as bytes, and "e:" are decoded
and stored as bytes.
Implementing this will require some refactoring of things between
QUtil and QPDF_String, plus we will need to implement a base64
encoder/decoder.
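
A sketch of the output-side decision described above; the codec hooks
are stand-ins for whatever ends up in QUtil/QPDF_String after that
refactoring, and all of the names are invented:

  #include <functional>
  #include <string>

  // Stand-in hooks; real implementations would live in QUtil/QPDF_String.
  struct StringCodecs
  {
      std::function<bool(std::string const&)> is_explicit_unicode;
      std::function<bool(std::string const&, std::string&)> pdfdoc_to_utf8;
      std::function<bool(std::string const&)> looks_binary;
      std::function<std::string(std::string const&)> to_utf8_without_bom;
      std::function<std::string(std::string const&)> base64_encode;
  };

  std::string
  encode_pdf_string_for_json(std::string const& s, StringCodecs const& c)
  {
      std::string utf8;
      if (c.is_explicit_unicode(s)) {
          // UTF-8 or UTF-16 marker present; drop the leading U+FEFF
          return "s:" + c.to_utf8_without_bom(s);
      } else if (c.pdfdoc_to_utf8(s, utf8)) {
          // fully mappable between pdf-doc and unicode
          return "s:" + utf8;
      } else if (c.looks_binary(s)) {
          return "e:" + c.base64_encode(s);
      } else {
          // the serializer escapes bytes >= 0x80 as \u0080 .. \u00ff
          return "b:" + s;
      }
  }
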
This enables a workflow like this:
* qpdf --json=latest infile.pdf > pdf.json
* modify pdf.json
* qpdf infile.pdf --update-from-json=pdf.json out.pdf
or
* qpdf --json=latest --json-stream-data=raw|filtered infile.pdf > pdf.json
* modify pdf.json
* qpdf pdf.json --infile-is-json out.pdf
Notes about streams and stream data:
* Always include "dict". "/Length" is removed from the stream
dictionary.
* Add new flag --json-stream-data={raw,filtered,none}. At most one of
"raw" and "filtered" will appear for each stream. If "filtered"
appears, "/Filter" and "/DecodeParms" are removed from the stream
dictionary. This makes the stream data and dictionary match for when
the file is read back in.
* Always include "filterable" regardless of value of
--json-stream-data. The value of filterable is influenced by
--decode-level, which is already in parameters.
* Add to parameters: value of json-stream-data, default is none
* If --json-stream-data=none, omit stream data entirely
* If --json-stream-data=raw, include raw stream data as base64. Show
the data even for unfiltered streams in "raw".
* If --json-stream-data=filtered, include the base64-encoded filtered
stream data if we can and should decode it based on decode-level.
Otherwise, include the base64-encoded raw data. See if we can honor
--normalize-content. If a stream appears unfiltered in the input,
still show it as filtered. Remove /DecodeParms and /Filter if
filtering.
Note that --json-stream-data=filtered is different from
--filtered-stream-data in that --filtered-stream-data implies
--decode-level=all while --json-stream-data=filtered does not. Make
sure this is mentioned in the help for both options.
QPDFJob
=======
Here are some ideas for QPDFJob that didn't make it into 10.6. Not all
of these are necessarily good -- just things to consider.
* replace mode: --replace-object, --replace-stream-raw,
--replace-stream-filtered
* update first paragraph of QPDF JSON in the manual to mention this
* object numbers are not preserved by write, so object ID lookup
has to be done separately for each invocation
* you don't have to specify length for streams
* you only have to specify filtering for streams if providing raw data
* Allow users to supply a custom progress reporter for QPDFJob
* Better interoperability with json output:
* Make sure all the things that print stuff to stdout have json
equivalents (check, showLinearizationData, etc.)
* There should be a way to get json output other than having it
print to stdout. It should be multi-language friendly and allow
for large amounts of data, such as providing a callback that qpdf
can write to (like a pipeline)
* See also JSON v2
* How do we chain jobs? The idea would be that the input and/or output
of a QPDFJob could be a QPDF object rather than a file. For input,
it's pretty easy. For output, none of the output-specific options
(encrypt, compress-streams, objects-streams, etc.) would have any
effect, so we would have to treat this like inspect for error
checking. The QPDF object in the state where it's ready to be sent
off to QPDFWriter would be used as the input to the next QPDFJob.
For the job json, I think we can have the output be an identifier
that can be used as the input for another QPDFJob. For a json file,
we could detect whether the top level is an array with the convention
that exactly one has an output, or we could have a subkey with other
job definitions or something. Ideally, any input
(copy-attachments-from, pages, etc.) could use a QPDF object. It
wouldn't surprise me if this exposes bugs in qpdf around foreign
streams as this has been a relatively fragile area before.
Documentation
=============
* Do a full pass through the documentation.
* Make sure `qpdf` is consistent. Use QPDF when just referring to
the package.
* Make sure markup is consistent
* Autogenerate where possible
* Consider which parts might be good candidates for moving to the
wiki.
* Commit 'Manual - enable line wrapping in table cells' from
Mon Jan 17 12:22:35 2022 +0000 enables table cell wrapping. See if
this can be incorporated directly into sphinx_rtd_theme and the
workaround can be removed.
* When possible, update the debian package to include docs again. See
https://bugs.debian.org/1004159 for details.
Document-level work
===================
* Ideas here may be superseded by #593.
* QPDFPageCopier -- object for moving pages around within files or
between files and performing various transformations. Reread/rewrite
_page-selection in the manual if needed.
* Handle all the stuff of pages and split-pages
* Do n-up, booklet, collation
* Look through cli and see what else...flatten-*?
* See comments in QPDFPageDocumentHelper.hh for addPage -- search
for "a future version".
* Make it efficient for bulk operations
* Make certain doc-level features selectable
* qpdf.cc should do all its page operations, including
overlay/underlay, splitting, and merging, using this
* There should also be example code
* After doc-level checks are in, call --check on the output files in
the "Copy Annotations" tests.
* Document-level checks. For example, for forms, make sure all form
fields point to an annotation on exactly one page as well as that
all widget annotations are associated with a form field. Hook this
into QPDFPageCopier as well as the doc helpers. Make sure it is
called from --check. A sketch of one such check appears after this list.
* See also issues tagged with "pages"
* Add flags to CLI to select which document-level options to
preserve or not preserve. We will probably need a pair of mutually
exclusive, repeatable options with a way to specify all, none, only
{x,y}, or all but {x,y}.
* If a page contains a reference to a file attachment annotation, when
that page is copied, if the file attachment appears in the top-level
EmbeddedFiles tree, that entry should be preserved in the
destination file. Otherwise, we probably will require the use of
--copy-attachments-from to preserve these. What will the strategy be
for deduplicating in the automatic case?
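
Rough sketch (not existing qpdf code) of what one of the form-related
checks above could look like using the current helpers: every widget
annotation should resolve to a form field, and each field's widgets
should each appear on exactly one page. A real implementation would
report through the normal warning machinery rather than stdout.

  #include <qpdf/QPDF.hh>
  #include <qpdf/QPDFAcroFormDocumentHelper.hh>
  #include <qpdf/QPDFObjGen.hh>
  #include <qpdf/QPDFPageDocumentHelper.hh>
  #include <iostream>
  #include <map>

  static void
  check_form_consistency(QPDF& pdf)
  {
      QPDFAcroFormDocumentHelper afdh(pdf);
      QPDFPageDocumentHelper pdh(pdf);

      // Count, for each widget annotation, how many pages it appears on.
      std::map<QPDFObjGen, int> pages_per_widget;
      for (auto& page: pdh.getAllPages()) {
          for (auto& annot: afdh.getWidgetAnnotationsForPage(page)) {
              ++pages_per_widget[annot.getObjectHandle().getObjGen()];
              if (afdh.getFieldForAnnotation(annot)
                      .getObjectHandle()
                      .isNull()) {
                  std::cout << "widget " << annot.getObjectHandle().unparse()
                            << " has no associated form field" << std::endl;
              }
          }
      }

      // Each field's widget annotations should be on exactly one page.
      for (auto& field: afdh.getFormFields()) {
          for (auto& annot: afdh.getAnnotationsForField(field)) {
              int n = pages_per_widget[annot.getObjectHandle().getObjGen()];
              if (n != 1) {
                  std::cout << "field " << field.getObjectHandle().unparse()
                            << " has a widget on " << n << " pages"
                            << std::endl;
              }
          }
      }
  }
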
Text Appearance Streams
=======================
This is a list of known issues with text appearance streams and things
we might do about it.
* For variable text, the spec says to pull any resources from /DR that
are referenced in /DA but if the resource dictionary already has
that resource, just use the one that's there. The current code looks
only for /Tf and adds it if needed. We might want to instead merge
/DR with resources and then remove anything that's unreferenced. We
have all the code required for that in ResourceFinder except
TfFinder also gets the font size, which ResourceFinder doesn't do.
* There are things we are missing because we don't look at font
metrics. The code from TextBuilder (work) has almost everything in
it that is required. Once we have knowledge of character widths, we
can support quadding and multiline text fields (/Ff 4096), and we
can potentially squeeze text to fit into a field. For multiline,
first squeeze vertically down to the font height, then squeeze
horizontally with Tz. For single line, squeeze horizontally with Tz.
If we use Tz, issue a warning. (See the Tz sketch after this list.)
* When mapping characters to widths, we will need to care about
character encoding. For built-in fonts, we can create a map from
Unicode code point to width and then go from the font's encoding to
unicode to the width. Get rid of "ugly switch statements" in
QUtil.cc and replace with static map initializers. See
misc/character-encoding/ (not on github) and font metric information
for the 14 standard fonts in my local pdf-spec directory.
* Once we know about character widths, we can correctly support
auto-sized variable text fields (0 Tf). If this is fixed, search for
"auto-sized" in cli.rst.
Fuzz Errors
===========
* https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=<N>
* Ignoring these:
* Out of memory in dct: 35001, 32516
External Libraries
==================
Current state (10.0.2):
* qpdf/external-libs repository builds external-libs on a schedule.
It detects and downloads the latest versions of zlib, jpeg, and
openssl and creates source and binary distribution zip files in an
artifact called "distribution".
* Releases in qpdf/external-libs are made manually. They contain
qpdf-external-libs-{bin,src}.zip.
* The qpdf build finds the latest non-prerelease release and downloads
the qpdf-external-libs-*.zip files from the releases in the setup
stage.
* To upgrade to a new version of external-libs, create a new release
of qpdf/external-libs (see README-maintainer in external-libs) from
the distribution artifact of the most recent successful build after
ensuring that it works.
Desired state:
* The qpdf/external-libs repository should create release candidates.
Ideally, every scheduled run would make its zip files available. A
personal access token with actions:read scope for the
qpdf/external-libs repository is required to download the artifact
from an action run, and qpdf/qpdf's secrets.GITHUB_TOKEN doesn't
have this access. We could create a service account for this
purpose. As an alternative, we could have a draft release in
qpdf/external-libs that the qpdf/external-libs build could update
with each candidate. It may also be possible to solve this by
developing a simple GitHub app.
* Scheduled runs of the qpdf build in the qpdf/qpdf repository (not a
fork or pull request) could download external-libs from the release
candidate area instead of the latest stable release. Pushes to the
build branch should still use the latest release so it always
matches the main branch.
* Periodically, we would create a release of external-libs from the
release candidate zip files. This could be done safely because we
know the latest qpdf works with it. This could be done at least
before every release of qpdf, but potentially it could be done at
other times, such as when a new dependency version is available or
after some period of time.
Other notes:
* The external-libs branch in qpdf/qpdf was never documented. We might
be able to get away with deleting it.
* See README-maintainer in qpdf/external-libs for information on
creating a release. This could be at least partially scripted in a
way that works for the qpdf/qpdf repository as well since they are
very similar.
PointerHolder to std::shared_ptr
================================
To perform update:
Cherry-pick pointerholder branch commit
Upgrade just the library. This is not necessary, but it's an added
check that the compatibility code works since it will show that tests,
examples, and CLI will work properly with the upgraded APIs, which
provides some assurance that other people will have a smooth time with
their code.
patrepl s/PointerHolder/std::shared_ptr/g {include,libqpdf}/qpdf/*.hh
patrepl s/PointerHolder/std::shared_ptr/g libqpdf/*.cc
patrepl s/make_pointer_holder/std::make_shared/g libqpdf/*.cc
patrepl s/make_array_pointer_holder/QUtil::make_shared_array/g libqpdf/*.cc
patrepl s,qpdf/std::shared_ptr,qpdf/PointerHolder, **/*.cc **/*.hh
git restore include/qpdf/PointerHolder.hh
cleanpatch
Increase to POINTERHOLDER_TRANSITION=3
make build_libqpdf -- no errors
Drop back to POINTERHOLDER_TRANSITION=2
make check -- everything passes
Then upgrade everything else. It would work to just start here.
Increase to POINTERHOLDER_TRANSITION=3
patrepl s/PointerHolder/std::shared_ptr/g **/*.cc **/*.hh
patrepl s/make_pointer_holder/std::make_shared/g **/*.cc
patrepl s/make_array_pointer_holder/QUtil::make_shared_array/g **/*.cc
patrepl s,qpdf/std::shared_ptr,qpdf/PointerHolder, **/*.cc **/*.hh
git restore include/qpdf/PointerHolder.hh
git restore libtests/pointer_holder.cc
cleanpatch
Remove all references to PointerHolder.hh from everything except
public headers and pointer_holder.cc.
make check -- everything passes
Increase to POINTERHOLDER_TRANSITION=4
Do a clean build and make check -- everything passes
Final steps:
* Change to POINTERHOLDER_TRANSITION=4 in autoconf.mk.in.
* Check code formatting
* std::shared_ptr<Members> m can be replaced with
std::shared_ptr<Members> m_ph and Members* m if performance is critical
* Could try Members indirection with Members* for QPDFObjectHandle
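
For reference, the m_ph-plus-raw-pointer idea above would look roughly
like this (illustrative class, not actual qpdf code):

  #include <memory>

  // Keep shared ownership through a shared_ptr while using a plain
  // pointer on the hot path so frequent member access doesn't pay for
  // shared_ptr dereference overhead.
  class Example
  {
    private:
      class Members
      {
        public:
          int counter{0};
      };

    public:
      Example() :
          m_ph(std::make_shared<Members>()),
          m(m_ph.get())
      {
      }

      void tick() { ++(m->counter); } // hot path uses the raw pointer

    private:
      std::shared_ptr<Members> m_ph; // owns the Members object
      Members* m;                    // non-owning alias for fast access
  };
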
When done:
* Update the smart-pointers section of the manual in design.rst
* Update comments in PointerHolder.hh
PointerHolder in public API:
PointerHolder<Buffer> Pl_Buffer::getBufferSharedPointer();
PointerHolder<Buffer> QPDFWriter::getBufferSharedPointer();
QUtil::read_file_into_memory(
char const*, PointerHolder<char>&, unsigned long&)
QPDFObjectHandle::addContentTokenFilter(
PointerHolder<QPDFObjectHandle::TokenFilter>)
QPDFObjectHandle::addTokenFilter(
PointerHolder<QPDFObjectHandle::TokenFilter>)
QPDFObjectHandle::newStream(
QPDF*, PointerHolder<Buffer>)
QPDFObjectHandle::parse(
PointerHolder<InputSource>, std::string const&,
QPDFTokenizer&, bool&, QPDFObjectHandle::StringDecrypter*, QPDF*)
QPDFObjectHandle::replaceStreamData(
PointerHolder<Buffer>, QPDFObjectHandle const&,
QPDFObjectHandle const&)
QPDFObjectHandle::replaceStreamData(
PointerHolder<QPDFObjectHandle::StreamDataProvider>,
QPDFObjectHandle const&, QPDFObjectHandle const&)
QPDFTokenizer::expectInlineImage(
PointerHolder<InputSource>)
QPDFTokenizer::readToken(
PointerHolder<InputSource>, std::string const&,
bool, unsigned long)
QPDF::processInputSource(
PointerHolder<InputSource>, char const*)
QPDFWriter::registerProgressReporter(
PointerHolder<QPDFWriter::ProgressReporter>)
QPDFEFStreamObjectHelper::createEFStream(
QPDF&, PointerHolder<Buffer>)
QPDFPageObjectHelper::addContentTokenFilter(
PointerHolder<QPDFObjectHandle::TokenFilter>)
ABI Changes
===========
This is a list of changes to make next time there is an ABI change.
Comments appear in the code prefixed by "ABI"
* Search for ABI to find items not listed here.
* Switch default --json to latest
* See PointerHolder to std::shared_ptr above.
* See where anonymous namespaces can be used to keep things private to
a source file. Search for `(class|struct)` in **/*.cc.
* See if we can use constructor delegation instead of init() in
classes with overloaded constructors.
* Merge two versions of QPDFObjectHandle::makeDirect per comment
* After removing legacy QPDFNameTreeObjectHelper and
QPDFNumberTreeObjectHelper constructors, NNTreeImpl can switch to
having a QPDF reference and assume that the reference is always
valid.
* Use `= delete` and `= default` for constructors and destructors
where possible
* Have QPDFObjectHandle setters return Class& to allow for use of
fluent interfaces. This includes array and dictionary mutators (see
the toy sketch at the end of this section), e.g.
newDictionary().replaceKey("/X", "1"_qpdf).replaceKey("/Y", "(asdf)"_qpdf);
* Add replaceKeyAndGet, appendItemAndGet, setArrayItemAndGet,
insertItemAndGet that return the new item so you can say
auto oh = dict.replaceKeyAndGet("/Key", QPDFObjectHandle::newSomething());
* Add getOrInsertKey("/X", oh) that returns the existing value or adds
oh as the new value and returns it.
* Add default values to the getters, like getIntValue(default_value).
If a default value is passed in, you never get a type warning.
* Added QPDFObjectHandle::ParserCallbacks::handleWarning but had to
revert because it was not binary compatible. Consider re-adding. The
commit that added this comment includes the reverting of the change.
The previous commit removes the code that was calling and using
handleWarning.
* Make it easier to deal with objects that should be indirect. Search
for makeIndirectObject in the code to find patterns. For example, it
would be nice to have a one-liner for the case of one or all
dictionary values or array items being replaced with an indirect
objects if direct. Maybe we want a version of copyForeignObject that
takes the foreign qpdf and converts the source object to indirect
before copying, though maybe we don't because it could cause
multiple copies to be made...usually it's better to handle that
explicitly.
* Deal with weak cryptographic algorithms:
* Github issue #576
* Add something to QPDFWriter that you must call in order to allow
creation of files with insecure crypto. Maybe
QPDFWriter::allowWeakCrypto. Call this when --allow-weak-crypto is
passed and probably also when copying encryption by default from
an input file. There should be some API change so that, when
people recompile with qpdf 11, their code won't suddenly stop
working. Getting this right will take careful consideration of the
developer and user experience. We don't want to create a situation
where exactly the same code fails to work in 11 but worked on 10.
See #576 for latest notes.
* Change deterministic id to use something other than MD5 but allow
the old way for compatibility -- maybe rename the method to force
the developer to make a choice
* Find other uses of MD5 and find the ones that are discretionary,
if any
* Have QPDFWriter raise an exception if it's about to write using
weak crypto and hasn't been given permission
* Search for --allow-weak-crypto in the manual and in qpdf.cc's help
information
* Update the ref.weak-crypto section of the manual
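
Toy sketch of the fluent and "AndGet" mutator ideas above (this is not
the real QPDFObjectHandle API; it just shows the two return-type
flavors):

  #include <map>
  #include <string>

  class Dict
  {
    public:
      // Fluent: returns the dictionary itself so calls can be chained.
      Dict& replaceKey(std::string const& key, std::string const& value)
      {
          items[key] = value;
          return *this;
      }

      // AndGet: returns the newly inserted value for immediate use.
      std::string& replaceKeyAndGet(
          std::string const& key, std::string const& value)
      {
          return items[key] = value;
      }

    private:
      std::map<std::string, std::string> items;
  };

  // Usage:
  //   Dict d;
  //   d.replaceKey("/X", "1").replaceKey("/Y", "(asdf)");
  //   auto& y = d.replaceKeyAndGet("/Z", "null");
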
Page splitting/merging
======================
* Update page splitting and merging to handle document-level
constructs with page impact such as interactive forms and article
threading. Check keys in the document catalog for others, such as
outlines, page labels, thumbnails, and zones. For threads,
Subramanyam provided a test file; see ../misc/article-threads.pdf.
Email Q-Count: 431864 from 2009-11-03.
* bookmarks (outlines) 12.3.3
* support bookmarks when merging
* prune bookmarks that don't point to a surviving page when merging
or splitting
* make sure conflicting named destinations work; possibly test by
including the same file by two paths in a merge
* see also comments in issue 343
Note: original implementation of bookmark preservation for split
pages caused a very high performance hit. The problem was
introduced in 313ba081265f69ac9a0324f9fe87087c72918191 and reverted
in the commit that adds this paragraph. The revert includes marking
a few test cases as $td->EXPECT_FAILURE. When properly coded, the
test cases will need to be adjusted to only include the parts of
the outlines that are actually copied. The tests in question are
"split page with outlines". When implementing properly, ensure that
the performance is not adversely affected by timing split-pages on
a large file with complex outlines such as the PDF specification.
When pruning outlines, keep all outlines in the hierarchy that are
above an outline for a page we care about. If one of the ancestor
outlines points to a non-existent page, clear its dest. If an
outline does not have any children that point to pages in the
document, just omit it.
Possible strategy:
* resolve all named destinations to explicit destinations
* concatenate top-level outlines
* prune outlines whose dests don't point to a valid page
* recompute all /Count fields
Test files
* page-labels-and-outlines.pdf: old file with both page labels and
outlines. All destinations are explicit destinations. Each page
has Potato and a number. All titles are feline names.
* outlines-with-actions.pdf: mixture of explicit destinations,
named destinations, goto actions with explicit destinations, and
goto actions with named destinations; uses /Dests key in names
dictionary. Each page has Salad and a number. All titles are
silly words. One destination is an indirect object.
* outlines-with-old-root-dests.pdf: like outlines-with-actions
except it uses the PDF-1.1 /Dests dictionary for named
destinations, and each page has Soup and a number. Also pages are
numbered with upper-case Roman numerals starting with 0. All
titles are silly words preceded by a bullet.
If outline handling is significantly improved, see
../misc/bad-outlines/bad-outlines.pdf and email:
https://mail.google.com/mail/u/0/#search/rfc822msgid%3A02aa01d3d013%249f766990%24de633cb0%24%40mono.hr)
* Form fields: should be similar to outlines.
Analytics
=========
Consider features that make it easier to detect certain patterns in
PDF files. The information below could be computed using an external
program that reads the existing json, but if it's useful enough, we
could add it directly to the json output.
* Add to "pages" in the json:
* "inheritsresources": bool; whether there are any inherited
attributes from ancestor page tree nodes
* "sharedresources": a list of indirect objects that are
"/Resources" dictionaries or "XObject" resource dictionary subkeys
of either the page itself or of any form XObject referenced by the
page.
* Add to "objectinfo" in json: "directpagerefcount": the number of
pages that directly reference this object (i.e., you can find an
indirect reference to the object in the page dictionary without
traversing over any indirect objects)
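
The "directpagerefcount" computation could be done outside of qpdf with
the current API; rough sketch (not part of qpdf):

  #include <qpdf/QPDF.hh>
  #include <qpdf/QPDFObjGen.hh>
  #include <qpdf/QPDFObjectHandle.hh>
  #include <map>
  #include <set>

  // Record every indirect object reachable from oh without passing
  // through another indirect object.
  static void
  collect_direct_refs(QPDFObjectHandle oh, std::set<QPDFObjGen>& refs)
  {
      if (oh.isIndirect()) {
          refs.insert(oh.getObjGen());
          return;
      }
      if (oh.isDictionary()) {
          for (auto& it: oh.getDictAsMap()) {
              collect_direct_refs(it.second, refs);
          }
      } else if (oh.isArray()) {
          for (auto& item: oh.getArrayAsVector()) {
              collect_direct_refs(item, refs);
          }
      }
  }

  static std::map<QPDFObjGen, int>
  direct_page_ref_counts(QPDF& pdf)
  {
      std::map<QPDFObjGen, int> counts;
      for (auto page: pdf.getAllPages()) {
          std::set<QPDFObjGen> refs;
          // Walk only the direct values of the page dictionary.
          for (auto& it: page.getDictAsMap()) {
              collect_direct_refs(it.second, refs);
          }
          for (auto const& og: refs) {
              ++counts[og];
          }
      }
      return counts;
  }
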
General
=======
NOTE: Some items in this list refer to files in my personal home
directory or that are otherwise not publicly accessible. This includes
things sent to me by email that are specifically not public. Even so,
I find it useful to make reference to them in this list.
* Get rid of remaining assert() calls from non-test code.
* Consider updating the fuzzer with code that exercises
copyAnnotations, file attachments, and name and number trees. Check
fuzzer coverage.
* Add code for creation of a file attachment annotation. It should
also be possible to create a widget annotation and a form field.
Update the pdf-attach-file.cc example with new APIs when ready.
* Flattening of form XObjects seems like something that would be
useful in the library. We are seeing more cases of completely valid
PDF files with form XObjects that cause problems in other software.
Flattening of form XObjects could be a useful way to work around
those issues or to prepare files for additional processing, making
it possible for users of the qpdf library to not be concerned about
form XObjects. This could be done recursively; i.e., we could have a
method to embed a form XObject into whatever contains it, whether
that is a form XObject or a page. This would require more
significant interpretation of the content stream. We would need a
test file in which the placement of the form XObject has to be in
the right place, e.g., the form XObject partially obscures earlier
code and is partially obscured by later code. Keys in the resource
dictionary may need to be changed -- create test cases with lots of
duplicated/overlapping keys.
* Part of closed_file_input_source.cc is disabled on Windows because
of odd failures. It might be worth investigating so we can fully
exercise this in the test suite. That said, ClosedFileInputSource
is exercised elsewhere in qpdf's test suite, so this is not that
pressing.
* If possible, consider adding CCITT3, CCITT4, or any other easy
filters. For some reference code that we probably can't use but may
be handy anyway, see
http://partners.adobe.com/public/developer/ps/sdk/index_archive.html
* If possible, support the following types of broken files:
- Files that have no whitespace token after "endobj" such that
endobj collides with the start of the next object
- See ../misc/broken-files
- See ../misc/bad-files-issue-476. This directory contains a
snapshot of the google doc and linked PDF files from issue #476.
Please see the issue for details.
* Additional form features
* set value from CLI? Specify title, and provide way to
disambiguate, probably by giving objgen of field
* Pl_TIFFPredictor is pretty slow.
* Support for handling file names with Unicode characters in Windows
is incomplete. qpdf seems to support them okay from a functionality
standpoint, and the right thing happens if you pass in UTF-8
encoded filenames to QPDF library routines in Windows (they are
converted internally to wchar_t*), but file names are encoded in
UTF-8 on output, which doesn't produce nice error messages or
output on Windows in some cases.
* If we ever wanted to do anything more with character encoding, see
../misc/character-encoding/, which includes machine-readable dump
of table D.2 in the ISO-32000 PDF spec. This shows the mapping
between Unicode, StandardEncoding, WinAnsiEncoding,
MacRomanEncoding, and PDFDocEncoding.
* Some test cases on bad files fail because qpdf is unable to find
the root dictionary when it fails to read the trailer. Recovery
could find the root dictionary and even the info dictionary in
other ways. In particular, issue-202.pdf can be opened by evince,
and there's no real reason that qpdf couldn't be made to be able to
recover that file as well.
* Audit every place where qpdf allocates memory to see whether there
are cases where malicious inputs could cause qpdf to attempt to
grab very large amounts of memory. Certainly there are cases like
this, such as if a very highly compressed, very large image stream
is requested in a buffer. Hopefully normal input to output
filtering doesn't ever try to do this. QPDFWriter should be checked
carefully too. See also bugs/private/from-email-663916/
* Interactive form modification:
https://github.com/qpdf/qpdf/issues/213 contains a good discussion
of some ideas for adding methods to modify annotations and form
fields if we want to make it easier to support modifications to
interactive forms. Some of the ideas have been implemented, and
some of them probably never will be implemented, but it's worth a
read if there is an intention to work on this. In the issue, search
for "Regarding write functionality", and read that comment and the
responses to it.
* Look at ~/Q/pdf-collection/forms-from-appian/
* When decrypting files with /R=6, hash_V5 is called more than once
with the same inputs. Caching the results or refactoring to reduce
the number of identical calls could improve performance for
workloads that involve processing large numbers of small files.
* Consider adding a method to balance the pages tree. It would call
pushInheritedAttributesToPage, construct a pages tree from scratch,
and replace the /Pages key of the root dictionary with the new
tree. (A sketch appears at the end of this section.)
* Study what's required to support savable forms that can be saved by
Adobe Reader. Does this require actually signing the document with
an Adobe private key? Search for "Digital signatures" in the PDF
spec, and look at ~/Q/pdf-collection/form-with-full-save.pdf, which
came from Adobe's example site. See also
../misc/digital-sign-from-trueroad/. If digital signatures are
implemented, update the docs on crypto providers, which mention
that this may happen in the future.
* Qpdf does not honor /EFF when adding new file attachments. When it
encrypts, it never generates streams with explicit crypt filters.
Prior to 10.2, there was an incorrect attempt to treat /EFF as a
default value for decrypting file attachment streams, but it is not
supposed to mean that. Instead, it is intended for conforming
writers to obey this when adding new attachments. Qpdf is not a
conforming writer in that respect.
* The whole xref handling code in the QPDF object allows the same
object with more than one generation to coexist, but a lot of logic
assumes this isn't the case. Anything that creates mappings only
with the object number and not the generation is this way,
including most of the interaction between QPDFWriter and QPDF. If
we wanted to allow the same object with more than one generation to
coexist, which I'm not sure is allowed, we could fix this by
changing xref_table. Alternatively, we could detect and disallow
that case. In fact, it appears that Adobe Reader and other PDF
viewing software silently ignores objects of this type, so this is
probably not a big deal.
* From a suggestion in bug 3152169, consider having an option to
re-encode inline images with an ASCII encoding.
* From github issue 2, provide more in-depth output for examining
hint stream contents. Consider adding an option to provide a
human-readable dump of linearization hint tables. This should
include improving the 'overflow reading bit stream' message as
reported in issue #2. There are multiple calls to stopOnError in
the linearization checking code. Ideally, these should not
terminate checking. It would require re-acquiring an understanding
of all that code to make the checks more robust. In particular,
it's hard to look at the code and quickly determine what is a true
logic error and what could happen because of malformed user input.
See also ../misc/linearization-errors.
* If I ever decide to make appearance stream-generation aware of
fonts or font metrics, see email from Tobias with Message-ID
<5C3C9C6C.8000102@thax.hardliners.org> dated 2019-01-14.
* Look at places in the code where object traversal is being done and,
where possible, try to avoid it entirely or at least avoid ever
traversing the same objects multiple times.
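
Rough sketch of the flat-rebuild part of the pages tree idea above (not
an existing qpdf method; a real version would also split /Kids into
intermediate nodes of bounded size and keep QPDF's internal page cache
consistent):

  #include <qpdf/QPDF.hh>
  #include <qpdf/QPDFObjectHandle.hh>

  static void
  rebuild_pages_tree(QPDF& pdf)
  {
      // Make sure no inherited attributes are lost when reparenting.
      pdf.pushInheritedAttributesToPage();

      auto pages = pdf.getAllPages(); // copy before restructuring
      auto kids = QPDFObjectHandle::newArray();
      auto new_pages =
          pdf.makeIndirectObject(QPDFObjectHandle::newDictionary());
      new_pages.replaceKey("/Type", QPDFObjectHandle::newName("/Pages"));
      new_pages.replaceKey(
          "/Count",
          QPDFObjectHandle::newInteger(
              static_cast<long long>(pages.size())));
      for (auto page: pages) {
          page.replaceKey("/Parent", new_pages);
          kids.appendItem(page);
      }
      new_pages.replaceKey("/Kids", kids);
      pdf.getRoot().replaceKey("/Pages", new_pages);
  }
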
----------------------------------------------------------------------
HISTORICAL NOTES
Performance
===========
As described in https://github.com/qpdf/qpdf/issues/401, there was
great performance degradation between qpdf 7.1.1 and 9.1.1. Doing a
bisect between dac65a21fb4fa5f871e31c314280b75adde89a6c and
release-qpdf-7.1.1, I found several commits that damaged performance.
I fixed some of them to improve performance by about 70% (as measured
by saying that old times were 170% of new times). The remaining
commits that broke performance either can't be correct because they
would re-introduce an old bug or aren't worth correcting because of
the high value they offer relative to a relatively low penalty. For
historical reference, here are the commits. The numbers are the time
in seconds on the machine I happened to be using of splitting the
first 100 pages of PDF32000_2008.pdf 20 times and taking an average
duration.
Commits that broke performance:
* d0e99f195a987c483bbb6c5449cf39bee34e08a1 -- object description and
context: 0.39 -> 0.45
* a01359189b32c60c2d55b039f7aefd6c3ce0ebde (minus 313ba08) -- fix
dangling references: 0.55 -> 0.6
* e5f504b6c5dc34337cc0b316b4a7b1fca7e614b1 -- sparse array: 0.6 -> 0.62
Other intermediate steps that were previously fixed:
* 313ba081265f69ac9a0324f9fe87087c72918191 -- copy outlines into
split: 0.55 -> 4.0
* a01359189b32c60c2d55b039f7aefd6c3ce0ebde -- fix dangling references:
4.0 -> 9.0
This commit fixed the awful problem introduced in 313ba081:
* a5a016cdd26a8e5c99e5f019bc30d1bdf6c050a2 -- revert outline
preservation: 9.0 -> 0.6
Note that the fix dangling references commit had a much worse impact
prior to removing the outline preservation, so I also measured its
impact in isolation.
A few important lessons (in README-maintainer)
* Indirection through PointerHolder<Members> is expensive, and should
not be used for things that are created and destroyed frequently
such as QPDFObjectHandle and QPDFObject.
* Traversal of objects is expensive and should be avoided where
possible.
Also, it turns out that PointerHolder is more performant than
std::shared_ptr.
Rejected Ideas
==============
* Investigate whether there is a way to automate the memory checker
tests for Windows.
* Consider adding "uninstall" target to makefile. It should only
uninstall what it installed, which means that you must run
uninstall from the version you ran install with. It would only be
supported for the toolchains that support the install target
(libtool).
* Provide support in QPDFWriter for writing incremental updates.
Provide support in qpdf for preserving incremental updates. The
goal should be that QDF mode should be fully functional for files
with incremental updates including fix_qdf.
Note that there's nothing that says an indirect object in one
update can't refer to an object that doesn't appear until a later
update. This means that QPDF has to treat indirect null objects
differently from how it does now. QPDF drops indirect null objects
that appear as members of arrays or dictionaries. For arrays, it's
handled in QPDFWriter where we make indirect nulls direct. This is
in a single if block, and nothing else in the code cares about it.
We could just remove that if block and not break anything except a
few test cases that exercise the current behavior. For
dictionaries, it's more complicated. In this case,
QPDF_Dictionary::getKeys() ignores all keys with null values, and
hasKey() returns false for keys that have null values. We would
probably want to make QPDF_Dictionary able to handle the special
case of keys that are indirect nulls and basically never have it
drop any keys that are indirect objects.
If we make a change to have qpdf preserve indirect references to
null objects, we have to note this in ChangeLog and in the release
notes since this will change output files. We did this before when
we stopped flattening scalar references, so this is probably not a
big deal. We also have to make sure that the testing for this
handles non-trivial cases of the targets of indirect nulls being
replaced by real objects in an update. I'm not sure how this plays
with linearization, if at all. For cases where incremental updates
are not being preserved as incremental updates and where the data
is being folded in (as is always the case with qpdf now), none of
this should make any difference in the actual semantics of the
files.
* The second xref stream for linearized files has to be padded only
because we need file_size as computed in pass 1 to be accurate. If
we were not allowing writing to a pipe, we could seek back to the
beginning and fill in the value of /L in the linearization
dictionary as an optimization to alleviate the need for this
padding. Doing so would require us to pad the /L value
individually and also to save the file descriptor and determine
whether it's seekable. This is probably not worth bothering with.
* Based on an idea suggested by user "Atom Smasher", consider
providing some mechanism to recover earlier versions of a file
embedded prior to appended sections.
* Consider creating a sanitizer to make it easier for people to send
broken files. Now that we have json mode, this is probably no
longer worth doing. Here is the previous idea, possibly implemented
by making it possible to run the lexer (tokenizer) over a whole
file. Make it possible to replace all strings in a file lexically
even on badly broken files. Ideally this should work on files that are
lacking xref, have broken links, duplicated dictionary keys, syntax
errors, etc., and ideally it should work with encrypted files if
possible. This should go through the streams and strings and
replace them with fixed or random characters, preferably, but not
necessarily, in a manner that works with fonts. One possibility
would be to detect whether a string contains characters with normal
encoding, and if so, use 0x41. If the string uses character maps,
use 0x01. The output should otherwise be unrelated to the input.
This could be built after the filtering and tokenizer rewrite and
should be done in a manner that takes advantage of the other
lexical features. This sanitizer should also clear metadata and
replace images. If I ever do this, the file from issue #494 would
be a great one to look at.
* Here are some notes about having stream data providers modify
stream dictionaries. I had wanted to add this functionality to make
it more efficient to create stream data providers that may
dynamically decide what kind of filters to use and that may end up
modifying the dictionary conditionally depending on the original
stream data. Ultimately I decided not to implement this feature.
This paragraph describes why.
* When writing, the way objects are placed into the queue for
writing strongly precludes creation of any new indirect objects,
or even changing which indirect objects are referenced from which
other objects, because we sometimes write as we are traversing
and enqueuing objects. For non-linearized files, there is a risk
that an indirect object that used to be referenced would no
longer be referenced, and whether it was already written to the
output file would be based on an accident of where it was
encountered when traversing the object structure. For linearized
files, the situation is considerably worse. We decide which
section of the file to write an object to based on a mapping of
which objects are used by which other objects. Changing this
mapping could cause an object to appear in the wrong section, to
be written even though it is unreferenced, or to be entirely
omitted since, during linearization, we don't enqueue new objects
as we traverse for writing.
* There are several places in QPDFWriter that query a stream's
dictionary in order to prepare for writing or to make decisions
about certain aspects of the writing process. If the stream data
provider has the chance to modify the dictionary, every piece of
code that gets stream data would have to be aware of this. This
would potentially include end user code. For example, any code
that called getDict() on a stream before installing a stream data
provider and expected that dictionary to be valid would
potentially be broken. As implemented right now, you must perform
any modifications on the dictionary in advance and have provided
/Filter and /DecodeParms at the time you installed the stream
data provider. This means that some computations would have to be
done more than once, but for linearized files, stream data
providers are already called more than once. If the work done by
a stream data provider is especially expensive, it can implement
its own cache.
The example examples/pdf-custom-filter.cc demonstrates the use of
custom stream filters. This includes a custom pipeline, a custom
stream filter, as well as modification of a stream's dictionary to
include creation of a new stream that is referenced from
/DecodeParms.