This introduces a better set of defaults for large databases. I've
experimentally determined that these values give much better throughput
in a couple of large-database scenarios, but I can't guarantee they are
always optimal. They're probably no worse than the old defaults, though.
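As a rough illustration of the kind of knobs involved (the concrete
numbers below are examples, not the values actually shipped), scaling up
goleveldb's write buffer, block cache, open-files cache and compaction
table size looks something like this:

```go
// Illustrative only: larger goleveldb settings of the kind this change
// tunes for big databases. The numbers are examples, not the shipped values.
package example

import (
	"github.com/syndtr/goleveldb/leveldb"
	"github.com/syndtr/goleveldb/leveldb/opt"
)

func openLargeDB(path string) (*leveldb.DB, error) {
	o := &opt.Options{
		OpenFilesCacheCapacity: 200,          // keep more SST files open
		BlockCacheCapacity:     64 * opt.MiB, // larger block cache
		WriteBuffer:            32 * opt.MiB, // bigger memtable before flushing
		CompactionTableSize:    8 * opt.MiB,  // larger tables per compaction
	}
	return leveldb.OpenFile(path, o)
}
```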
This is a tiny tool to grab the GitHub releases info and generate a
more concise version of it. The conciseness comes from two aspects:
- We select only the latest stable release and the latest pre-release.
There is no need to offer upgrades to versions that are older than the
latest. (There might be, in the future, when we hit 2.0. We can revisit
this at that time.)
- We use our own structs to deserialize and reserialize the data. This
means we drop all attributes that we don't understand and hence don't
require.
All in all, the new response is about 10% the size of the previous one
and avoids the issue where we would serve only a bunch of release
candidates and no stable release.
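As a sketch of the shape of the filtering (the struct fields and
selection logic here are illustrative, not necessarily what the tool
actually uses; the ordering assumption matches how the GitHub API
returns releases, newest first):

```go
// Sketch: decode the GitHub releases JSON into minimal structs (dropping
// unknown fields), keep only the newest stable and the newest pre-release,
// and re-encode. Field selection here is an example.
package main

import (
	"encoding/json"
	"os"
)

type release struct {
	Tag        string  `json:"tag_name"`
	Prerelease bool    `json:"prerelease"`
	Assets     []asset `json:"assets"`
}

type asset struct {
	Name string `json:"name"`
	URL  string `json:"browser_download_url"`
}

func main() {
	var all []release
	if err := json.NewDecoder(os.Stdin).Decode(&all); err != nil {
		panic(err)
	}

	// The GitHub API lists releases newest first, so the first stable and
	// the first pre-release we see are the latest of each kind.
	var out []release
	var haveStable, havePre bool
	for _, rel := range all {
		switch {
		case rel.Prerelease && !havePre:
			out = append(out, rel)
			havePre = true
		case !rel.Prerelease && !haveStable:
			out = append(out, rel)
			haveStable = true
		}
		if haveStable && havePre {
			break
		}
	}

	_ = json.NewEncoder(os.Stdout).Encode(out)
}
```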
NATSymmetricUDPFirewall is actually not NAT at all; it means the machine has a global IP address with a UDP firewall in front of it (the RFC calls this a Symmetric UDP Firewall). This is fine to punch through, both in theory and in practical testing.
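For illustration only, the classification this implies might look like
the hypothetical helper below (the constant names mirror the STUN NAT
type names, but the helper itself is not the actual code):

```go
// Hypothetical helper: decide whether a discovered NAT type allows UDP
// hole punching. NATSymmetricUDPFirewall is included because it really
// means "public IP behind a UDP firewall", which punches fine.
package example

type natType string

const (
	natNone                 natType = "NATNone"
	natFullCone             natType = "NATFull"
	natRestricted           natType = "NATRestricted"
	natPortRestricted       natType = "NATPortRestricted"
	natSymmetric            natType = "NATSymmetric"
	natSymmetricUDPFirewall natType = "NATSymmetricUDPFirewall"
)

// isPunchable reports whether we expect UDP hole punching to work.
func isPunchable(t natType) bool {
	switch t {
	case natNone, natFullCone, natRestricted, natPortRestricted, natSymmetricUDPFirewall:
		return true
	default:
		// Truly symmetric NAT (different mapped ports per destination) is
		// generally not punchable without extra tricks.
		return false
	}
}
```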
This changes the on-disk format for new raw reports to be gzip
compressed. It also adds the ability to serve these reports in plain
text, to insulate web browsers from the change (previously we just
served the raw reports straight from disk using Caddy).
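A minimal sketch of the two halves, assuming made-up paths and handler
names rather than the actual usage-report server code:

```go
// Sketch: write new raw reports gzip-compressed, and decompress on the
// fly when a browser asks for the plain-text version.
package example

import (
	"compress/gzip"
	"io"
	"net/http"
	"os"
	"path/filepath"
)

// saveReport stores a raw report gzip-compressed on disk.
func saveReport(dir, name string, raw []byte) error {
	f, err := os.Create(filepath.Join(dir, name+".json.gz"))
	if err != nil {
		return err
	}
	defer f.Close()

	gw := gzip.NewWriter(f)
	if _, err := gw.Write(raw); err != nil {
		return err
	}
	return gw.Close()
}

// serveReport decompresses a stored report and serves it as plain text,
// so browsers see the same thing they did before the format change.
func serveReport(dir string) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		name := filepath.Base(r.URL.Path) // naive; real code should validate
		f, err := os.Open(filepath.Join(dir, name+".json.gz"))
		if err != nil {
			http.NotFound(w, r)
			return
		}
		defer f.Close()

		gr, err := gzip.NewReader(f)
		if err != nil {
			http.Error(w, "bad report", http.StatusInternalServerError)
			return
		}
		defer gr.Close()

		w.Header().Set("Content-Type", "text/plain; charset=utf-8")
		io.Copy(w, gr)
	}
}
```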
* lib/model: Don't panic on failed chmod-back on directory (fixes #5836)
This makes the "in writable dir" wrapper log chmod-back errors instead
of panicking. To do that we need a logger, so the function moved into
the model package, which is also the only place it's used. The tests
came along.
(The test also exercised osutil.RenameOrCopy as a sort of piggybacking.
I removed that.)
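Roughly the idea, as a sketch rather than the actual model package code
(function name and logging call are illustrative):

```go
// Sketch: make the parent directory writable, run the operation, then
// restore the original mode. A failure to chmod back is logged rather
// than causing a panic.
package example

import (
	"log"
	"os"
	"path/filepath"
)

// inWritableDir ensures the containing directory of path is writable
// while fn runs, then restores its previous permissions.
func inWritableDir(fn func(string) error, path string) error {
	dir := filepath.Dir(path)
	info, err := os.Stat(dir)
	if err != nil {
		return err
	}

	if info.Mode().Perm()&0200 == 0 {
		// Directory is not writable; temporarily add the write bits.
		if err := os.Chmod(dir, info.Mode().Perm()|0700); err != nil {
			return err
		}
		defer func() {
			if err := os.Chmod(dir, info.Mode().Perm()); err != nil {
				// Previously this would panic; now we just log and move on.
				log.Printf("failed to restore permissions on %s: %v", dir, err)
			}
		}()
	}

	return fn(path)
}
```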
This adds a set of magical environment variables that can be used to
tweak the database parameters. It's totally undocumented and not
intended to be a long-term or supported thing.
It's ugly, but there is a backstory. I have a couple of large
installations where the database options are inefficient or otherwise
suboptimal (24/7 compaction going on and stuff like that). I don't
*know* the correct database parameters, nor yet a formula or method to
derive them, so this requires experimentation. Experimentation needs to
happen partly in production, and rolling out new builds for every tweak
isn't practical. This provides override points for all reasonable
values, while not changing anything by default.
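The override-point pattern is roughly the following (variable names and
defaults here are illustrative, not the actual ones added):

```go
// Illustrative override-point pattern: each tunable reads an environment
// variable and falls back to the existing default when it is unset, so
// nothing changes unless a variable is explicitly exported.
package example

import (
	"os"
	"strconv"
)

// envInt returns the named environment variable as an int, or def if it
// is unset or unparsable.
func envInt(name string, def int) int {
	if v := os.Getenv(name); v != "" {
		if n, err := strconv.Atoi(v); err == nil {
			return n
		}
	}
	return def
}

// Hypothetical usage:
//   writeBuffer := envInt("DB_WRITE_BUFFER_BYTES", 4<<20)
//   blockCache  := envInt("DB_BLOCK_CACHE_BYTES", 8<<20)
```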
Ideally, at the end of such experimentation, we'll know which values
are relevant to change and in what manner, and can provide a more
user-friendly knob to do so, or do it automatically based on the
database size.