A good question. In the marking scheme, no marks are assigned to automation, so I guess the answer has to be yes (it is sufficient) and no (you won't be docked marks). Still, you may want to look for white-box testing tools for the language you're using.
The idea is to analyze and test only the code executed from the point at which the transaction has been identified (e.g., the line in the Merged TSF begins with "CRE", so it is a "create service" transaction) until processing for that transaction is complete.
So you should include all the code to check the constraints of the transaction and update the internal copy of the Central Services, but there should be no file input/output involved in the part being analyzed. It's quite possible that the code to be tested will be a single method.
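To make the boundary concrete, here is a hypothetical sketch (the names `process_create`, `run_back_office`, the `services` dictionary, and the "CRE" record layout are my assumptions, not the assignment's actual interface). The dispatcher does the file reading and transaction identification; only the constraint-checking and in-memory update method is the unit you analyze for coverage:

```python
# Hypothetical sketch: the dispatcher identifies the transaction and does
# the file I/O; only process_create (no I/O) is analyzed for coverage.

def process_create(fields, services):
    """Check create-service constraints and update the in-memory copy.

    fields: parsed transaction fields, e.g. (service number, name).
    services: dict mapping service number -> service record.
    Returns an error message, or None on success.
    """
    number, name = fields
    if number in services:              # constraint: must not already exist
        return "service number already exists"
    services[number] = {"name": name}   # update the internal copy only
    return None

def run_back_office(lines, services):
    """Dispatcher: reads records and routes them; NOT part of the unit."""
    for line in lines:
        if line.startswith("CRE"):      # identified as a "create service"
            error = process_create(line[3:].split(","), services)
            if error:
                print(error)
```

In this structure, `process_create` is exactly the "single method" mentioned above: every path through it can be exercised without touching the file system.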
Now, the tests you make to cover the analyzed parts will involve running the whole program, so you will need to specify inputs and expected outputs for them. But it's likely that you'll be able to share a single Central Services File among most or all of your coverage test cases (similar to A1, where a single Valid Services List with one service could be used for most or all of the Front End test cases), so that shouldn't be too annoying.
In A5, you're only testing the Back Office, so the constraints to be tested are the Back Office ones. You can test the Back Office alone by making test inputs to the Back Office. You don't need to run the Front End for this assignment. (If you're confident in the correctness of your Front End, I guess you could use it to convert the relatively user-friendly Front End input to the rather annoying Transaction Summary File format, but it's certainly not required.)
You need to analyze all the code executed to implement the transaction, which in your case sounds like the processing function for the transaction. If the code for the transaction involves calling another method, then that means the code executed in that method as well.
(In general, if you ignore "inner" method calls, you could defeat the purpose of white-box testing by moving all the code in the processing method into a new method, and have the body of the processing method consist of nothing but a call to the new method. Then you could trivially cover every part of the processing method, because it would be one line of code! The point of white-box testing is to exploit the structure of the code to guide the testing process; there's no reason to ignore structure just because it lives in a different method.)
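To show that degenerate case concretely (again with hypothetical names), moving the body into a helper would reduce the processing method to a single trivially-covered line:

```python
# Degenerate structure that would "defeat" coverage if inner calls were
# ignored: process_create is covered by any one test, while all the real
# branches now live in _do_create and would go untested.

def _do_create(fields, services):
    number, name = fields
    if number in services:              # the branch you actually need to cover
        return "service number already exists"
    services[number] = {"name": name}
    return None

def process_create(fields, services):
    return _do_create(fields, services)  # one line: "100% coverage" of this method
```

That is why the code executed in called methods counts as part of what you analyze.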
You should check that the outputs are correct when you run the tests (these are tests, after all), but it can be difficult to specify every change to the output files you are checking, so if you wish you can describe that output rather than giving it as literal file contents.
Good question. In a real-world setting, we might want to see the inputs to the unit (such as a specific method that processes "create service" transactions) rather than the inputs to the whole program. The listing you'll turn in, however, should have the inputs to the whole program (most importantly, the MTSF). This is partly for marking reasons: everyone's Back Office will be structured differently, so the TAs would need to figure out how to map your unit inputs back to the MTSF. In addition to taking extra time, that's an opportunity for confusion that could lead to unjustified loss of marks. So we're asking you to do that mapping yourself.
I don't think this is too onerous, since most of the work in the Back Office should be around processing specific transactions, and you've already had to figure out how to extract information from the MTSF; this is only asking you to do that process in reverse.
The non-marking reason is that going from a unit test case to an integration test case is something you might need to do in the "real world", too.
This is input partitioning at the level of the unit, not the input to the whole Back Office. The relevant inputs would be the inputs to your (for example) "create service" method, and the relevant code should be the same as for the other white-box methods, as described in answer 2 above.
That depends on what you mean by "validate". The Back Office can "assume" that numeric fields are numbers; while it's good practice to include assertions to check that, such assumptions aren't part of white-box testing. That is, you can assume that numeric fields are numbers, and any path that might be executed when a numeric field isn't a number can be ignored. (But be sure to mention this in your report.)
On the other hand, some constraints, like "service number doesn't already exist", are tied to the specific transaction the Back Office is processing (for most transactions the service number is required to exist, for a "create service" transaction it's required not to exist), so those paths are considered part of the "create service" functionality and must be path-tested.
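As a hypothetical illustration of the distinction (all names assumed), an assertion can document the "numeric fields are numbers" assumption without creating a path you must cover, while the existence checks are real branches of the transaction's functionality:

```python
def process_delete(number, services):
    # For most transactions the service number is required to exist,
    # so this branch is part of their functionality and must be path-tested.
    if number not in services:
        return "no such service"
    del services[number]
    return None

def process_create(number, name, services):
    # Assumption, not a tested path: numeric fields are assumed to be
    # numbers, so this assert only documents the assumption and the
    # failure path can be ignored (but say so in your report).
    assert number.isdigit()
    # Transaction-specific constraint: for "create service" the number
    # must NOT already exist, so this branch needs a path test of its own.
    if number in services:
        return "service number already exists"
    services[number] = {"name": name}
    return None
```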
You're all so young...back in my day, we handed in coding assignments on paper, so we all knew what a code listing was.
A source listing is the actual lines of code. You can take a screenshot in your editor or IDE if you want. For some kinds of white-box testing, such as path coverage, make sure your editor shows line numbers so you can refer to them in your report.