That was an oversight on my part. For simplicity, let's say that the Back Office always sets the capacity to 30.
The Back Office is run once at the end of each day. It reads the entire old Central Services File into an internal copy, processes all the transactions from the Merged Transaction Summary File to update the internal copy, and then writes out an entire new Central Services File from the internal copy. There is no online or interactive aspect to the Back Office.
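That read-everything, process-everything, write-everything cycle can be sketched as below. This is only an illustration, not the required design: the record layout (a space-separated "number name capacity balance" line per service) and the CRE/EOS codes shown are hypothetical stand-ins for whatever the requirements actually specify, and the fixed capacity of 30 follows the simplification above.

```python
def run_back_office(old_csf_lines, mtsf_lines, log=print):
    # 1. Read the entire old Central Services File into an internal copy.
    services = {}
    for line in old_csf_lines:
        number, name, capacity, balance = line.split()
        services[number] = [name, int(capacity), int(balance)]

    # 2. Process the transactions, in order, to update the internal copy.
    for line in mtsf_lines:
        fields = line.split()
        code = fields[0]
        if code == "EOS":           # session separator: nothing to update
            continue
        if code == "CRE":           # create with the fixed capacity of 30
            number, name = fields[1], fields[2]
            if number in services:  # constraint violation: log and ignore
                log("error: service number already exists: " + number)
                continue
            services[number] = [name, 30, 0]
        # ... the other transaction codes would be handled here ...

    # 3. Write an entire new Central Services File from the internal copy.
    return [f"{n} {s[0]} {s[1]} {s[2]}" for n, s in sorted(services.items())]
```

Note that there is no interaction anywhere in the cycle: the function's only inputs are the two files, and its only outputs are the new file and the error log.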
After each session of a Front End, upon logout, it writes out a Transaction Summary File. You can assume one session per day (one login, one logout) for each instance of the Front End. However, while that assumption may have simplified your Front End by sparing it from producing multiple TSFs, it does not directly simplify the Back Office.
The reason is that the Back Office must deal with multiple instances of the Front End. Even though you can assume each instance has one session per day and produces one TSF, multiple Front End instances mean that, at the end of each day, there will be multiple TSFs generated.
However, the requirements for the Back Office say that it takes in a Merged TSF, which means that the Back Office program itself doesn't have to worry about this! Which leads to the next question...
Assignment #6 will probably involve integrating the Front End and Back Office. So in a couple of weeks, you will need to write some code to do the merge. But as specified, the Back Office takes in only the one already merged file.
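For what it's worth, since each TSF already ends with its own EOS line, the merge may amount to simple concatenation in session order. That is an assumption about Assignment #6, not a stated requirement, but a minimal sketch would look like:

```python
def merge_tsfs(tsf_contents):
    """Concatenate the day's Transaction Summary Files into one Merged TSF.

    tsf_contents is a list of line-lists, one per Front End instance.
    Each TSF ends with its own EOS line, so the merged file keeps every
    EOS, and its last line is the last TSF's EOS.
    """
    merged = []
    for lines in tsf_contents:
        merged.extend(lines)
    return merged
```

A Front End with no transactions contributes only its EOS line, which is how back-to-back EOS lines can appear in the merged file.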
There is none specified. I would use end-of-file detection to handle that, but if you would prefer an explicit end-of-file marker (like the EOS line for the TSFs), you can invent one as long as you document its form.
That's a good question. But I don't actually think it's a problem. According to the requirements for the Back Office, there is already the following constraint:
– a created service must have a new, unused service number
Because you are processing the transactions from one Merged Transaction Summary File, in the order given in that file, once you have created the service the first time, the service number exists. So when you process the second CRE transaction, it will violate this constraint, and you will log an error and ignore it.
Good question. To quote the project requirements document, "The Back Office uses only internal files, and therefore can assume correct input format on all files. However, values of all fields should be checked for validity, and the Back Office should immediately stop and log a fatal error on the terminal if any is invalid."
Since the Old Central Services File was itself produced by the Back Office, it may be reasonable to assume that its values are valid. Certainly you can assume that they are well-formed. On the other hand, if you are not checking the values, then it's still best practice to document the assumptions you've made about those values using assertions, so in the end you'll be doing roughly the same thing.
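As a sketch of what "document the assumptions using assertions" might look like: the specific ranges below (five-digit service numbers, capacity at most 30, non-negative balance) are illustrative guesses, and the real validity rules come from the requirements document.

```python
def check_service_record(number, name, capacity, balance):
    # Assumptions about values read from the old Central Services File,
    # documented as assertions rather than handled as runtime errors.
    assert number.isdigit() and len(number) == 5, "bad service number"
    assert name != "", "empty service name"
    assert 0 <= capacity <= 30, "capacity out of range"
    assert balance >= 0, "negative balance in old CSF"
```

Either way, a record that violates an assumption stops the program, which is consistent with "stop and log a fatal error" for invalid values.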
(I guess you mean the New Valid Services file; the Merged Transactions file is input to the Back Office, not output.) On a fatal error, no output files should be or need to be produced.
In the Back Office, whenever a transaction would cause a constraint to be violated, you should log an error message and ignore the transaction.
The Back Office processes the transactions from one Merged Transaction Summary File one by one, in the order in that file. If a transaction would cause a constraint to be violated, then you should log an error and ignore the transaction. So the answer is: if a transaction would cause the balance to become negative, you should log an error message and ignore the transaction, then go on to process the remaining transactions.
The real-world problem of doing something more sensible in this situation is beyond what we're doing for this project.
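The log-and-skip behaviour for a balance that would go negative might look like the following. The deduction-style transaction here is hypothetical (the actual transaction codes and fields come from the requirements); the point is only that the transaction is skipped, not partially applied.

```python
def apply_charge(services, number, amount, log=print):
    """Deduct `amount` from a service's balance, skipping the whole
    transaction (with an error message) if the balance would go negative."""
    balance = services[number]
    if balance - amount < 0:
        log(f"error: charge of {amount} would make balance of "
            f"service {number} negative; transaction ignored")
        return
    services[number] = balance - amount
```

After the skip, processing simply continues with the next transaction in the file.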
Probably it will be easiest if the Back Office program takes four arguments:
The idea is that the Back End should process any transactions that could have been produced by a functional Front End, including transactions that violate constraints enforced by the Back End, but should stop for a TSF that doesn't match the constraints listed under "Transaction Summary File" in the requirements.
So, if the error is the result of the Front End not meeting the requirements, the Back End can exit with a fatal error. For example, if the Front End produced the TSF
ZZZ asdfjkl;
the Back End could exit, because ZZZ is not a valid transaction code and should never have been produced.
If, on the other hand, the Front End met all its requirements, then the error is not fatal. For example, if the service name in a DEL transaction line doesn't match the service name in the Central Services File, the Back End should log the error but keep going. The Front End doesn't have the service names, and the requirements for the Front End don't include a constraint about the names matching, so a valid Front End can produce that transaction line, and the Back End should "process" it—by doing nothing, because the names don't match.
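The distinction might be coded like this. The set of valid codes and the DEL field layout are illustrative assumptions, not the real specification; what matters is that an impossible line is fatal while a merely mismatched one is logged and skipped.

```python
import sys

VALID_CODES = {"CRE", "DEL", "EOS"}   # illustrative subset of codes

def process_line(services, line, log=print):
    fields = line.split()
    code = fields[0]
    if code not in VALID_CODES:
        # A correct Front End can never produce this line: fatal error.
        log("fatal: invalid transaction code " + code)
        sys.exit(1)
    if code == "DEL":
        number, name = fields[1], fields[2]
        if services.get(number) != name:
            # A correct Front End *can* produce this (it has no names),
            # so log the error and keep going; the delete does nothing.
            log("error: name mismatch in DEL; transaction ignored")
            return
        del services[number]
```

So the test is not "did something go wrong?" but "could a requirements-conforming Front End have written this line?"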
That was an oversight on my part, which means you can either
Whenever more than one Front End exists, the merged TSF will contain multiple EOS lines. (The last line of the MTSF will still be an EOS, because it has to come from the last line of a TSF.) It's possible there will be a "block" of several EOS lines—that could happen if one Front End had no transactions, so its EOS would appear immediately after another Front End's EOS.
So, yes, the Back Office should keep reading until the end.
To answer a possible followup, I don't think the requirements say what to do if the merged TSF doesn't end with an EOS. That should never happen, provided your Front End is correct and the MTSF is actually the merged output of several TSFs, so I would just ignore this possibility.
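In other words, an EOS line is a session separator, not a terminator, so the reading loop treats it as a no-op and runs to end of file. A minimal sketch, assuming one transaction per line:

```python
def read_mtsf(lines):
    """Collect the transactions from a Merged TSF, reading to end of file.
    EOS lines only mark session boundaries and may repeat back to back."""
    transactions = []
    for line in lines:
        if line == "EOS":
            continue          # keep reading: more sessions may follow
        transactions.append(line)
    return transactions
```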