The number of necessary columns used to be computed by producing grids of different sizes and seeing whether all the columns were used. However, if there were two files and we tried to fit them into a 3-column grid, it would produce three headers and all three columns would be used; when trying a 4-column grid, the two extra headers would fill the third column and leave the fourth empty, so 3 columns would be counted as used.
Now, when the grid fits into the terminal and the number of columns exactly matches the number of files being displayed, it returns immediately instead of trying bigger grids.
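A minimal sketch of the early return, with made-up types and function names (try_fit, widest_fitting_grid); exa's real grid code differs, but the shape of the fix is the same:

    struct Grid { columns: usize }

    // Stub for illustration: Some(grid) when `files` laid out in
    // `columns` columns fits within `terminal_width` cells.
    fn try_fit(files: &[&str], columns: usize, terminal_width: usize) -> Option<Grid> {
        let widest = files.iter().map(|f| f.len()).max().unwrap_or(0) + 2;
        if columns * widest <= terminal_width { Some(Grid { columns }) } else { None }
    }

    fn widest_fitting_grid(files: &[&str], terminal_width: usize) -> Grid {
        let mut best = Grid { columns: 1 };
        for columns in 2..=files.len() {
            match try_fit(files, columns, terminal_width) {
                Some(grid) => {
                    best = grid;
                    // New: one column per file can't be improved upon, so
                    // return instead of trying grids that would only gain
                    // empty columns (and miscount them as used).
                    if columns == files.len() { return best; }
                }
                None => break, // too wide for the terminal
            }
        }
        best
    }

With two files, widest_fitting_grid now stops as soon as it finds the 2-column grid, instead of probing 3- and 4-column grids whose extra columns only hold headers.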
Fixes GH-436.
This commit removes the extra space that was added between icons and file names in commit 128fadd, and adds an option to put it back.
Re-fixes GH-619 and fixes GH-541.
This commit makes adding icons to file names the job of the file name renderer, rather than something that each individual view does. This is now possible thanks to the previous commit, a1869f2, which moved the icons option into the same module. The repeated code has been removed.
It happens to fix a bug where the width of each column in the grid-details view was calculated incorrectly, making lines slightly too long for the terminal, because the icon wasn't being taken into account.
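To illustrate the idea (the names here are invented, not exa's actual API): when the renderer both draws the icon and reports the width, every view gets consistent output and a correct measurement:

    struct FileNameRenderer { show_icons: bool }

    impl FileNameRenderer {
        // Every view calls this, so icons are added in exactly one place.
        fn render(&self, icon: char, name: &str) -> String {
            if self.show_icons {
                format!("{} {}", icon, name)
            } else {
                name.to_owned()
            }
        }

        // Views ask the renderer for the width instead of measuring the
        // bare file name, so the icon's cells are always counted. (A real
        // implementation would use display width, not a char count.)
        fn width(&self, name: &str) -> usize {
            let icon_cells = if self.show_icons { 2 } else { 0 }; // icon + space
            icon_cells + name.chars().count()
        }
    }

Because every view measures through the same width method, a view can no longer forget the icon when sizing its columns.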
This commit changes the way the extended test suite is run.
Previously, there was a folder full of expected outputs, and a script that ran exa repeatedly and checked that the outputs matched. This script was hacked together, with many problems:
• It stopped at the first failure, so if one test failed, you had no idea how many others were failing.
• It didn't actually show you a diff when an output differed; it just told you the output didn't match.
• It combined stdout and stderr, and didn't test exa's exit status.
• The output file names were just whatever I felt like calling the file at the time.
• There was no way to run only a few of the tests; you had to run the whole thing each time.
• There was no feel-good overall view where you could see how many tests were passing.
I started writing Specsheet to solve this problem (amongst other problems), and now, three and a half years later, it's finally ready for prime time.
The tests are now defined as data rather than as a script. The outputs have a consistent naming convention (directory_flags.ansitxt), and stdout, stderr, and the exit status are checked separately. Specsheet also lets simple outputs (empty, non-empty, or one-line error messages) be written inline rather than needing to be in files.
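As a sketch of what one such data-defined check might look like (the field names here are illustrative; consult the Specsheet documentation for the exact schema):

    [[cmd]]
    name = "exa lists a directory of files in long view"
    shell = "exa --long /testcases/files"
    stdout = { file = "outputs/files_long.ansitxt" }
    stderr = { empty = true }
    status = 0

Because stdout, stderr, and the status are separate fields, a simple assertion such as an empty stderr can live inline in the test file instead of in an output file of its own.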
So even though this runs pretty much the same tests as the run.sh script did, the tests are now more organised, making it easy to see where tests are missing and where functionality is not being tested.