...
File versioning in the L1 pipeline: A run will usually be processed multiple times, piecemeal (a run cut by a downlink/data transfer, missing data showing up later, etc.). While we only process a contiguous chunk of data once, we merge everything together every time we process a piece of a run. It is the latest version of the merged file that will be the most complete and of most interest. The infrastructure (Data catalog etc.) needs to take this into account.
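The merge-and-version scheme above can be sketched as follows. This is a minimal illustration, not the actual pipeline code: the `RunStore` class, its method names, and the set-union "merge" are all hypothetical stand-ins chosen to show the behavior that matters, namely that each new chunk triggers a full re-merge and produces a new, higher version, so the latest version is always the most complete.

```python
from dataclasses import dataclass, field

@dataclass
class RunStore:
    """Hypothetical tracker of merged data and version history for one run."""
    chunks: list = field(default_factory=list)    # every chunk received so far
    versions: list = field(default_factory=list)  # one merged snapshot per pass

    def process_chunk(self, chunk):
        # A given chunk is processed only once, but the merge re-runs
        # over all chunks each time a new piece of the run arrives.
        self.chunks.append(chunk)
        merged = sorted(set().union(*self.chunks))  # stand-in for the real merge
        self.versions.append(merged)
        return len(self.versions)  # version number of the new merged file

    @property
    def latest(self):
        # The catalog should point consumers at the latest (most complete) version.
        return self.versions[-1]
```

For example, if a run arrives in two pieces, `process_chunk({1, 2, 3})` followed by `process_chunk({3, 4})` yields two versions, and `latest` holds the complete merge of both pieces; a catalog that only tracked version 1 would serve an incomplete file.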
...