To begin, we took the time to thoroughly understand the starter code, since we would be building on it. This included stepping through the simulation and run functionality to understand how packets were sent and printed. We then followed the recommended approach in the "Implementation Strategy" section of the instructions, implementing the features in the suggested order. Working through the tests incrementally in that order set a good pace for the project.
On the sender side, we start with an initial window size, then decide whether to grow the window based on incoming acks and our current threshold. When settling the specifics of our transport protocol, we first followed the pseudocode in the slides and found that it did not pass the tests. Using the performance tests as a guide, we then iteratively tuned the threshold and window-size variables for better performance. We maintain a list of acks we are awaiting and a queue of messages pending transmission, both updated as packets arrive. We also handle receiving different message types, such as acks and retransmit requests. Finally, the sender periodically checks for sent packets that may have timed out, using our RTT log to set the timeout.
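The sender-side logic above can be sketched roughly as follows. This is an illustrative sketch, not our actual code: the class and variable names (`Sender`, `INITIAL_WINDOW`, `INITIAL_THRESHOLD`), the tuned constants, and the EWMA-based RTT estimate are all assumptions standing in for the real implementation details.

```python
INITIAL_WINDOW = 4       # illustrative values; the real tuned
INITIAL_THRESHOLD = 16   # constants came out of trial and error

class Sender:
    def __init__(self):
        self.window = INITIAL_WINDOW
        self.threshold = INITIAL_THRESHOLD
        self.awaiting_ack = {}   # seq -> send timestamp (the "RTT log")
        self.send_queue = []     # messages waiting to be transmitted
        self.rtt_estimate = 0.5  # seconds; refined from observed acks

    def on_ack(self, seq, now):
        """Grow the window on a successful ack: double below the
        threshold, then increase linearly (slow-start style)."""
        sent_at = self.awaiting_ack.pop(seq, None)
        if sent_at is not None:
            # Smooth the RTT estimate with an EWMA of samples.
            sample = now - sent_at
            self.rtt_estimate = 0.875 * self.rtt_estimate + 0.125 * sample
        if self.window < self.threshold:
            self.window *= 2
        else:
            self.window += 1

    def on_loss(self):
        """On suspected loss, halve the threshold and reset the window."""
        self.threshold = max(self.window // 2, 1)
        self.window = INITIAL_WINDOW

    def timed_out(self, now):
        """Return seqs whose acks are overdue relative to the RTT log."""
        deadline = 2 * self.rtt_estimate
        return [seq for seq, sent_at in self.awaiting_ack.items()
                if now - sent_at > deadline]
```

The periodic timeout scan simply compares each outstanding packet's send time against a deadline derived from the RTT estimate, which is why the RTT log is kept alongside the awaiting-ack state.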
On the receiver side, we maintain a queue of messages to be printed and only print a message once all earlier messages in the sequence have arrived. We also maintain a set of sequence numbers we have already seen, so duplicate retransmissions are ignored. This design tolerates any reordering or jitter in the network.
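The receiver-side reordering logic can be sketched as below. Again, the names (`Receiver`, `deliver`) and the dict-based buffer are illustrative assumptions rather than our exact implementation.

```python
class Receiver:
    def __init__(self):
        self.next_seq = 0   # next sequence number we can print
        self.buffered = {}  # out-of-order messages, keyed by seq
        self.seen = set()   # dedupe retransmitted packets

    def deliver(self, seq, msg):
        """Buffer msg and return every message that is now printable
        in order; duplicates are dropped via the seen set."""
        if seq in self.seen:
            return []
        self.seen.add(seq)
        self.buffered[seq] = msg
        printable = []
        # Drain the buffer while the next expected seq is present.
        while self.next_seq in self.buffered:
            printable.append(self.buffered.pop(self.next_seq))
            self.next_seq += 1
        return printable
```

For example, if packet 1 arrives before packet 0, it is buffered; once packet 0 arrives, both are released for printing in order, and any retransmitted copy of packet 0 is silently ignored.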
What made this project especially challenging was how open-ended it was. The instructions did not prescribe how to handle adversarial network conditions, whether drops, out-of-order arrival, corrupted packet contents, varying latencies, or varying bandwidths. This meant we had to spend a lot of time dissecting the tests to infer the expected behavior for each network error and environment. As a result, the project involved many hours of trial and error.
Additionally, writing code efficient enough to handle all of these network conditions was another major challenge.
We tested our code extensively against the provided test cases, which served as the main benchmark for whether our features were implemented correctly. We also relied on log statements to build an intuitive sense of how our protocol behaves, especially under adversarial conditions.