Creating maintainable software is absolutely critical to any software development effort.

Some people will always advocate for whatever gets new software written quickly. However, if maintainability isn't a goal from the very beginning, that quickly written software will become fragile, cumbersome, and difficult to work on, which can significantly slow down a project. It is important to document both the business requirements that made the software necessary in the first place and its internal structure, along with anything else relevant to its future maintenance.


If it can be scripted, it should be scripted.

Programming can become very repetitive work. I've found that any time I go through a complex task, I usually end up having to do it again at some point. Having a script to run for commonly executed tasks can be incredibly powerful and save a lot of time.
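
As a minimal sketch, assuming a Python environment and entirely made-up paths and commands, even a short script can capture a recurring multi-step chore so it runs the same way every time:

    #!/usr/bin/env python3
    """Hypothetical chore script: archive the current logs, then rebuild an index.

    The paths and the rebuild command are placeholders; the point is that the
    steps live in one script instead of in someone's memory.
    """
    import shutil
    import subprocess
    from datetime import date
    from pathlib import Path

    LOG_DIR = Path("/var/log/myapp")          # assumed log location
    ARCHIVE_DIR = Path("/srv/archive/myapp")  # assumed archive location

    def main() -> None:
        # Copy today's logs into a dated archive directory.
        target = ARCHIVE_DIR / date.today().isoformat()
        target.mkdir(parents=True, exist_ok=True)
        for log_file in LOG_DIR.glob("*.log"):
            shutil.copy2(log_file, target / log_file.name)

        # Rebuild the search index (a made-up management command).
        subprocess.run(["./manage.py", "rebuild_index"], check=True)

    if __name__ == "__main__":
        main()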


Deploying a new revision of a website should be a one-step process.

The deployment process should involve the execution of a single script that automates all of the steps necessary to upgrade a production website to a new revision. This means checking everything out of revision control, compiling anything that needs it, and uploading the build to the relevant production machines; stopping any production software such as web servers, databases, and messaging servers; running database schema modifications and data migration scripts; and finally restarting the production server software only if all of the previous steps completed successfully. The process should also run any automated tests, which could include running test data through various APIs before launching the new software in production and/or testing the production system once it comes back online.
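
As a sketch of what that single script might look like, here is a Python outline in which every command is a placeholder for whatever a given project actually uses (git, make, rsync, ssh, and the service and migration scripts are all assumptions). The structure is the point: each step must succeed before the next runs, and nothing restarts on top of a half-finished deployment.

    #!/usr/bin/env python3
    """Sketch of a one-step deployment script.

    Every command is a placeholder for whatever a project actually uses
    (revision control, build system, file transfer, service control,
    migrations). Run the steps in order and abort on the first failure.
    """
    import subprocess
    import sys

    STEPS = [
        ["git", "clone", "--depth", "1", "git://example.com/site.git", "build"],  # check out
        ["make", "-C", "build", "all"],                                           # compile
        ["make", "-C", "build", "test"],                                          # automated tests
        ["rsync", "-a", "build/dist/", "prod-host:/srv/site/release/"],           # upload
        ["ssh", "prod-host", "sudo service webserver stop"],                      # stop services
        ["ssh", "prod-host", "/srv/site/release/migrate-schema.sh"],              # schema changes
        ["ssh", "prod-host", "/srv/site/release/migrate-data.sh"],                # data migration
        ["ssh", "prod-host", "sudo service webserver start"],                     # restart
        ["ssh", "prod-host", "/srv/site/release/smoke-test.sh"],                  # post-deploy check
    ]

    def main() -> int:
        for step in STEPS:
            print("running:", " ".join(step))
            if subprocess.run(step).returncode != 0:
                # Stop immediately; production is never restarted on top of a
                # half-finished deployment.
                print("deployment aborted at:", " ".join(step), file=sys.stderr)
                return 1
        print("deployment complete")
        return 0

    if __name__ == "__main__":
        sys.exit(main())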


A bug tracker and wiki are necessities.

I want it to be trivial for users throughout the company, if not customers themselves, to file bug reports. Each report should then be assigned to a developer, who can clarify the problem, update the issue, and either work on it or reassign it as needed. If a task isn't in the bug tracker, you can be assured that it is not being worked on, no matter how minor. It is amazing how much time "just one little thing" can end up taking, and when it isn't tracked, the problem and its solution are lost without anyone else ever knowing about them.

A wiki lets people collaborate on documentation instead of trying to keep track of things by emailing different versions of documents back and forth, which inevitably loses ideas and information and keeps no one on the same page.

My favorite bug tracker and wiki are Jira and Confluence from Atlassian.
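
Filing issues can even be automated. As a rough sketch using Jira's REST API (the instance URL, credentials, and project key below are placeholders), an internal tool could open a bug directly:

    #!/usr/bin/env python3
    """Sketch: filing a bug in Jira from an internal tool via its REST API.

    The instance URL, credentials, and project key are placeholders.
    """
    import requests

    JIRA_URL = "https://jira.example.com"  # hypothetical Jira instance
    AUTH = ("reporter-bot", "api-token")   # hypothetical credentials

    def file_bug(summary: str, description: str, project_key: str = "WEB") -> str:
        """Create a Bug issue and return its key (e.g. 'WEB-123')."""
        payload = {
            "fields": {
                "project": {"key": project_key},
                "summary": summary,
                "description": description,
                "issuetype": {"name": "Bug"},
            }
        }
        response = requests.post(
            f"{JIRA_URL}/rest/api/2/issue", json=payload, auth=AUTH, timeout=10
        )
        response.raise_for_status()
        return response.json()["key"]

    if __name__ == "__main__":
        print(file_bug("Checkout page returns 500", "Steps to reproduce: ..."))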


Compartmentalize services behind network-available APIs.

It can be very valuable to expose services through APIs on a company's internal network. This can be more flexible and can offer a greater level of consistency across multiple pieces of software when they all consume the same service. It contrasts with the practice of developing custom libraries and then deploying disparate pieces of software throughout a company, each likely compiled against a different version of those libraries. One piece of advice I would offer is not to design your network-available APIs as if there were no network between the API and the caller. If you are tempted to ignore network-specific problems, I encourage you to read the paper "A Note on Distributed Computing."
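
To make that concrete, here is a sketch of a client for a hypothetical internal accounts service. The endpoint is invented, but it shows what acknowledging the network looks like: explicit timeouts, retries with backoff, and a failure the caller must handle rather than a call that pretends to be a local function.

    #!/usr/bin/env python3
    """Sketch of a client for a hypothetical internal accounts service.

    The endpoint is invented; the point is that the caller deals with
    timeouts, retries, and failure instead of pretending the call is local.
    """
    import time

    import requests

    SERVICE_URL = "http://accounts.internal.example.com/api/users"  # hypothetical

    class ServiceUnavailable(Exception):
        """Raised when the service cannot be reached after retrying."""

    def get_user(user_id, retries=3, timeout=2.0):
        """Fetch a user record, retrying transient network failures."""
        last_error = None
        for attempt in range(retries):
            try:
                response = requests.get(f"{SERVICE_URL}/{user_id}", timeout=timeout)
                response.raise_for_status()
                return response.json()
            except (requests.ConnectionError, requests.Timeout) as exc:
                last_error = exc
                time.sleep(2 ** attempt)  # back off before the next attempt
        raise ServiceUnavailable(f"accounts service unreachable: {last_error}")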


Don't reinvent the wheel.

There are many Open Source projects available for use in commercial software, and I make use of them whenever possible. When using a library, I usually wrap it with an API that meets my usage needs so that I'm not forever locked into that particular version of the library. This also gives me the ability to swap it out for something else if a better solution presents itself in the future.
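
As a sketch of that wrapping approach, using the requests library as the stand-in open source dependency (the wrapper interface itself is made up), application code depends only on the wrapper, so replacing or upgrading the library means touching just this one module:

    #!/usr/bin/env python3
    """Sketch: isolating a third-party library behind a small in-house API.

    Application code depends only on HttpFetcher; if the underlying library
    (here, requests) is upgraded or replaced, only this module changes.
    """
    import requests

    class HttpFetcher:
        """Thin wrapper exposing only the operations the application needs."""

        def __init__(self, timeout=5.0):
            self._timeout = timeout
            self._session = requests.Session()

        def get_text(self, url):
            """Return the body of a successful GET request as text."""
            response = self._session.get(url, timeout=self._timeout)
            response.raise_for_status()
            return response.text

    # Application code uses the wrapper, never requests directly:
    if __name__ == "__main__":
        print(HttpFetcher().get_text("https://example.com")[:80])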