I think I learned something today, while dealing with a particularly nitpicky code reviewer, that's going to change how I write code forever. About a week and a half ago I picked up a ticket from one of our more senior developers, who had been too busy to get around to this relatively insignificant work for a while. The task was to add some jitter to our retry logic to avoid creating a thundering herd problem. In .NET I would for sure have reached for something like Polly to handle retries, which has jitter built in, but alas, I am a Python developer now, so... 🐍
I didn't figure there'd be too much work involved: I needed to add a random 0 to 0.5 seconds to the sleep between retries, easy peasy. So I wrote one new unit test to cover my new work, submitted my PR, and waited for approval.
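For anyone curious what that change roughly looks like, here's a minimal sketch of retry-with-jitter; the function and parameter names are my placeholders, not our actual code:

```python
import random
import time

# Hypothetical sketch, not the real implementation: retry a callable,
# adding 0-0.5s of random jitter to each sleep so that many clients
# failing at once don't all retry in lockstep (the thundering herd).
def retry_with_jitter(func, max_attempts=3, base_delay=1.0, max_jitter=0.5):
    for attempt in range(1, max_attempts + 1):
        try:
            return func()
        except Exception:
            if attempt == max_attempts:
                raise  # out of attempts: let the exception propagate
            time.sleep(base_delay + random.uniform(0, max_jitter))
```

The jitter spreads retries out in time instead of having every caller hammer the recovering service at the exact same instant.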
The first warning sign came when I saw that the senior dev I'd taken the ticket from had reacted with the 👀 emoji on the automated Slack message announcing my PR. A few minutes later there were two comments. I won't bore you with the details, but they both were basically, "why did you do it this way? Here's how I would have done this." If you've been in this business for any amount of time you've run into a review like this. My initial reaction was annoyance, and I was ready to explain exactly why I was right and he was wrong, but I guess I'm mellowing out as I get older, because I thought better of it and decided to leave it until the next day and approach it with a clear head.
With a rested brain I decided that rather than fight about it I'd spend the time trying out the suggestions, but first I wrote a couple more unit tests to make sure the behavior didn't change from the original implementation. Then I plugged in the suggested changes and ran the tests. All of my new tests failed. I never trust myself in situations like this, so I rolled back the changes, ran the tests, verified they passed, and then, more carefully this time, implemented the suggested changes again.
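The kind of pinning test I mean looks something like this; `fetch_with_retries` and the `ConnectionError` are illustrative stand-ins for our real code, not what I actually wrote:

```python
# Hedged sketch: pin the existing contract (the exception still escapes
# once retries are exhausted) so any refactor that swallows it fails loudly.
def fetch_with_retries(fetch, max_attempts=3):
    last_exc = None
    for _ in range(max_attempts):
        try:
            return fetch()
        except ConnectionError as exc:
            last_exc = exc
    raise last_exc  # the behavior under test: the exception still propagates

def test_exception_still_raised_when_all_attempts_fail():
    def always_fail():
        raise ConnectionError("service down")
    try:
        fetch_with_retries(always_fail)
    except ConnectionError:
        return  # pass: existing behavior preserved
    raise AssertionError("refactor swallowed the exception")
```

Writing the tests against the current implementation first means the suggested rewrite has to prove itself against the old behavior, not against my memory of it.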
The same tests came back with ❌'s
That was unexpected, but at least now I didn't have to fight about it. I explained in the thread that the suggested changes would cause a change in behavior (an exception we were previously raising would no longer be raised). No harm, no foul. Again I waited for approval, but instead he doubled down. In addition to his first suggestions I just needed to make another couple of changes: "flip > to >= here, remove an n + 1 there, and should be good to go".
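A toy illustration of why "flip > to >=" is not a behavior-preserving tweak; the names here are mine, not the actual diff:

```python
# Hypothetical boundary check, as an example only: flipping the comparison
# makes the loop give up one attempt earlier, which is exactly the kind of
# silent behavior change a pinned unit test catches.
def should_give_up(attempt, max_retries):
    # original boundary: give up only after exceeding max_retries
    return attempt > max_retries

def should_give_up_flipped(attempt, max_retries):
    # suggested boundary: gives up one attempt sooner
    return attempt >= max_retries
```

On the boundary value the two disagree, so the number of retries (and whether the final exception path is ever reached) changes.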
I was annoyed again by the new pushback, but seeing how well things worked out last time, I decided to try his suggestions and see if they broke the tests, and they did. So I posted again that the suggested changes would affect the existing behavior and waited for approval. The next day I saw that he'd gone on vacation, so I had to wait a few days.
When he came back he doubled down again ‼️
That's when I realized two things:
Most of the time this particular flavor of problematic code review features one person with strong, but possibly incorrect, opinions and lots of confidence (also known as "loud and wrong"), and another person with less confidence who just wants the ordeal to be over as soon as possible. Typically I fall into the latter group and assume that the confident reviewer is just smarter than I am, and that the more quickly I implement their correct changes and get out of their way, the less time they will have to realize how dumb I am...
I have got to stop deferring to these loud and wrong people.
This time, by building out tests to increase my own confidence, I was able to defend against his attacks without stressing myself out and beating myself up wondering which of us was right. I could tell by his next reply that the shoe was now firmly on the other foot and he was starting to get flustered. Earlier today he sent me a flurry of new changes: "You're right, that doesn't work. Okay, last try from me! Try this, I tested this so I know it works!" I plugged in his changes again; only two tests failed this time, but I'm going to give it time to sit, look it over again tomorrow, maybe write another test, just to be sure.