exa/xtests/outputs/files_grid_8col.ansitxt
Benjamin Sago 61c5df7c11 Use Specsheet for the extended tests
This commit changes the way the extended test suite is run.

Previously, there was a folder full of expected outputs, and a script that ran exa repeatedly and checked that its output matched them. This script was hacked together and had many problems:

• It stopped at the first failure, so if one test failed, you had no idea how many others would have failed too.
• It didn't show you the diff when an output differed; it just told you there was a mismatch.
• It combined stdout and stderr, and didn't test exa's exit status.
• The output file names were just whatever I felt like calling each file at the time.
• There was no way to run only a few of the tests; you had to run the whole thing each time.
• There was no feel-good overall view where you could see how many tests were passing.

I started writing Specsheet to solve this problem (amongst other problems), and now, three and a half years later, it's finally ready for prime time.

The tests are now defined as data rather than as a script. The outputs have a consistent naming convention (directory_flags.ansitxt), and stdout, stderr, and the exit status are checked separately. Specsheet also lets simple outputs (empty, non-empty, or one-line error messages) be written inline rather than needing to be in files.
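As a rough illustration of the tests-as-data idea, a check for the grid output above might look something like the fragment below. This is a hypothetical sketch, not copied from the repository; the field names (`shell`, `stdout`, `stderr`, `status`) follow the general shape of a Specsheet command check, but the exact syntax may differ.

```toml
# Hypothetical Specsheet check: run exa against a fixture directory and
# compare each output channel separately (field names are illustrative).
[[cmd]]
name = "Grid view, eight columns"
shell = "exa --grid files"
stdout = { file = "outputs/files_grid_8col.ansitxt" }  # expected output kept on disk
stderr = { empty = true }                              # nothing should appear on stderr
status = 0                                             # exa must exit successfully
```

Because each check is a data record rather than a line in a script, a runner can execute them all, report every failure with a diff, and filter to a subset, which is exactly what the old run.sh couldn't do.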

So even though this runs pretty much the same tests as the run.sh script did, the tests are now better organised, making it easy to see where tests are missing and which functionality is not being covered.
2020-10-17 21:12:18 +01:00

6 lines
356 B
Plaintext

1_bytes 2_MiB 4_KiB 6_bytes 7_MiB 9_KiB 11_bytes 12_MiB
1_KiB 3_bytes 4_MiB 6_KiB 8_bytes 9_MiB 11_KiB 13_bytes
1_MiB 3_KiB 5_bytes 6_MiB 8_KiB 10_bytes 11_MiB 13_KiB
2_bytes 3_MiB 5_KiB 7_bytes 8_MiB 10_KiB 12_bytes 13_MiB
2_KiB 4_bytes 5_MiB 7_KiB 9_bytes 10_MiB 12_KiB