The team at my new job has grown pretty rapidly as more resources have been devoted to pushing things out the door, and as we've done some reorganization. Along with this have come the growing pains of folding new developers into what had been a small team. Naturally, a few lessons come with this.
Code review is good
The code, let's just say, is idiosyncratic; it's prompted a long-overdue look at code review. It kind of surprises me that, in 2015, any company doing software development still hasn't instituted some form of code review. We're primarily a hardware company, I guess, but I really don't see that as much of an excuse.
Along with code review comes a bevy of other benefits: consistency in coding style, language selection, and, presumably, some kind of design. It's this last part that really matters. We're all idiosyncratic; we all write code differently; we all like our own tools; we all have preferred parenthesis placement. But deviation from the accepted standards and design means that code becomes harder to read, and, for me at least, much harder to test.
Commit one thing at a time
This is a bit of a sore point for me, but I confess I've violated this principle many times in my programming career. (While I'm no git expert, I will say that one of its niceties is how easily you can branch if you get distracted in the middle of a task, or package up subtasks as multiple commits instead of one giant one.) Having said this, we've had to review risky commits with orthogonal concerns intertwined: if one of the fixes turns out to break backwards compatibility, we might brick a system, and it's rather inconvenient if we can't back it out because it can't be extricated from another, essential fix.
Moving from a mature product to a less mature one has made me appreciate all the more the practice of disciplined coding.
Testing is simulation
This one is probably obvious, but I hadn't really thought that hard about it until I realized that our research team and I needed the same infrastructure to solve different (albeit related) problems: they need to validate (for example) statistical models based on particular inputs and outputs, while I need to be able to validate whether particular hardware and software systems are playing nicely with each other.
It turns out the functionality we need for both is identical. The code, however, has been written in such a way that the task is difficult for both of us. If the software had been designed from the ground up with this in mind, we'd probably have a well-developed data layer and a vertically integrated set of hooks for testing, too.
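To make that concrete, here's a minimal sketch of the kind of shared seam I mean. None of this is our actual code; the Reading type, the DataSource protocol, and the CannedDataSource fake are all hypothetical names, and the "model" is a toy. The point is just that a model-validation check and a hardware integration test can drive the same hook, swapping only the data source behind it.

```python
from dataclasses import dataclass
from typing import Protocol, Sequence


@dataclass
class Reading:
    """One simulated sensor reading (hypothetical shape)."""
    sensor_id: str
    value: float


class DataSource(Protocol):
    """The shared hook both kinds of tests drive."""
    def readings(self) -> Sequence[Reading]: ...


class CannedDataSource:
    """Replays a fixed set of readings, standing in for real hardware."""
    def __init__(self, readings: Sequence[Reading]) -> None:
        self._readings = list(readings)

    def readings(self) -> Sequence[Reading]:
        return self._readings


def mean_by_sensor(source: DataSource) -> dict:
    """Toy stand-in for a statistical model the research team might validate."""
    samples = {}
    for r in source.readings():
        samples.setdefault(r.sensor_id, []).append(r.value)
    return {sensor: sum(vals) / len(vals) for sensor, vals in samples.items()}


if __name__ == "__main__":
    # Model validation: known inputs in, expected outputs checked.
    canned = CannedDataSource([Reading("temp0", 20.0), Reading("temp0", 22.0)])
    assert mean_by_sensor(canned) == {"temp0": 21.0}
    # An integration test would swap in a DataSource backed by the real
    # hardware and run the same pipeline against it.
    print("canned-data check passed")
```

The design choice worth noting is the seam itself: once the data layer hides whether readings come from canned files or live hardware, the research team and I stop needing separate infrastructure.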
The good news, I suppose, is that we're working on all of these things; it's just going to take a bit of time to get it all integrated into the nuts and bolts.