Up at 5AM: The 5AM Solutions Blog

Ethically Skipping the Tests?

Posted on Thu, May 26, 2011 @ 01:29 PM

5AM has a written code of ethics summarized by the phrase: Think | Do Well | Be Good | Stand Up. It's not an empty piece of corporate prose. At company meetings, employees give personal accounts of those values and how they shape their work. The phrase appears in our standard email signature. We depend on those values as we go about our work and hold ourselves to high standards. I believe our ethics are a key contributor to client satisfaction and our record of a 100% referenceable client list. In my 4+ years here, 5AM has walked the walk.

I am also an IEEE member, and the IEEE has its own Software Engineering Code of Ethics (jointly adopted with the ACM). While it lacks an easy-to-remember summarizing phrase, it lists 8 principles, documents professional ethical aspirations, and details some 80 concrete rules. I find the two codes complementary: 5AM's code speaks to software and non-software activities alike, and the IEEE code digs more deeply into the software engineering part of our work.

It is with this background that I found myself confronting the following questions: When, if ever, can a software engineer systematically skip writing tests for the code base they are developing? What are the ethical considerations?

First, some context. Imagine a client that explicitly accepts the risks associated with forgoing unit tests, a regression suite, and other testing practices. Their position is that they would rather have buggy software sooner, or that the software is unlikely to need long-term maintenance, so a regression suite isn't worth it. Speed is of the essence, and failures are a price the client is willing to pay. Their business is set up to expect and respond to the inevitable production bug. In other words, imagine this client believes their "no testing" policy is a considered choice, not a myopic directive.

Software does fail in production quite often, and in many cases such failures are well tolerated - it depends on the context. Regulatory and legal requirements exist for some classes of software (medical devices, safety-critical software, etc.) and not for others. It's almost unthinkable that software directly impacting human lives would be written without tests. On the other hand, the Linux kernel ships without accompanying automated tests (the Linux Test Project isn't shipped alongside it). Instead, Linux relies on a broad, ad-hoc pre-release testing process - with great success.

Both the 5AM and IEEE codes bear on our scenario.

5AM
Do Well: We will explain our development processes ... so that our customers will have reasonable expectations for the finished product.

Stand Up: We are committed to developing software that performs as expected and that is bug-free upon delivery.

We publish our software development process, which includes testing practices as standard fare. Our process also says that team-specific modifications are not only allowed but expected; improvement and change in response to the client's environment is a good thing. So I think we're OK from the Do Well perspective. Stand Up is trickier. Dropping testing for this client would let the software "perform as expected," but it won't be "bug-free" when it reaches production, because for this client speed is more desirable than correctness. Perhaps we can relax the idea that production = delivery and say that planned production remediation is the ultimate threshold for delivery correctness. Is that moving the goalposts after the rules are established?
IEEE
1.03. Approve software only if they have a well-founded belief that it is safe, meets specifications, passes appropriate tests, and does not diminish quality of life, diminish privacy or harm the environment. The ultimate effect of the work should be to the public good.

3.01. Strive for high quality, acceptable cost and a reasonable schedule, ensuring significant tradeoffs are clear to and accepted by the employer and the client, and are available for consideration by the user and the public.

3.05. Ensure an appropriate method is used for any project on which they work or propose to work.

3.10. Ensure adequate testing, debugging, and review of software and related documents on which they work.

1.03, 3.05, and 3.10 speak directly to testing as a responsibility of a software engineer. But notice the words "appropriate" and "adequate" as qualifiers. It seems that our prospective client is making a 3.01-style "significant tradeoff" in favor of speed over testing. Can we accept that tradeoff as eliminating our ethical obligation to write tests? Should we? I think this is a tough spot. Presumably the client is in the best position to evaluate the consequences of their policy. If they accept the risk, I believe we are ultimately within our ethical bounds to forgo testing.

The final point I'll make is that there is good reason, and good data, to back the notion that good testing practices in fact speed up development. So while we may be within our ethical bounds on the software engineering side, we still have an obligation to Stand Up and make our case for testing as part of the software development process.
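For concreteness, here is a minimal sketch of the kind of small, fast test this post has in mind (Python, pytest-style; the parse_dosage function and its domain are hypothetical, not drawn from any 5AM project). A few lines like these pin down expected behavior so a later "quick fix" can't silently break it.

    # Hypothetical example of a unit/regression test; run with pytest.

    def parse_dosage(text):
        """Parse a dosage string like '5 mg' into (amount, unit)."""
        amount, unit = text.split()
        return float(amount), unit

    def test_parse_dosage_returns_amount_and_unit():
        assert parse_dosage("5 mg") == (5.0, "mg")

    def test_parse_dosage_handles_decimal_amounts():
        # Documents expected behavior for a slightly trickier input.
        assert parse_dosage("2.5 ml") == (2.5, "ml")

Tests like these double as executable documentation, which is part of why they tend to pay for themselves over the life of a project.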
