Note: in this article, I raise a question without answering it. Rather, I want to point out that we have a problem worth working on, and that there is a large space for new innovation and opportunity.
The problem is: how do we measure an engineering practice? How do we prove it works or doesn't work? How do we prove it's superior to other approaches? For example:
- How do we measure a hackathon? How do we prove it works? How do we prove that a hackathon brings value we wouldn't get any other way?
- How do we measure the return on investment in unit testing?
- How do we measure DevOps? How do we prove that it's superior to other models?
- … (the list can go on and on)
Take hackathons as an example. There are a lot of arguments in favor of them. Most are subjective, based on personal feeling and observation, and they appeal to perception and reasoning rather than numbers. For example, some say "lots of good ideas emerged from hackathons". Then the question becomes: would those ideas have emerged anyway through other channels? How do we prove that some of them would never have emerged without the hackathon? For another example, some say "hackathons boost team morale". Then the question becomes: how do you measure the boost? Are you sending a survey to the participants? Inside companies, such surveys are not to be trusted: people are afraid to say negative things, especially about events arranged, sponsored, or initiated by top executives. If someone says "we found good hires through hackathons", then the question becomes: could you have found equally good hires just as effectively (if not more effectively) through other channels? And some arguments are simply logically flawed: "Facebook does hackathons, Facebook is successful, so hackathons must help make a software company successful."
Now consider unit testing. If someone cites "unit tests will help cut bug count by up to 90%" in support of unit testing, that number will get questioned really hard, since no two projects are alike. In many other fields, such as sociology, researchers can examine tens of thousands of similar samples and use statistical regression to support claims like "being exposed to a bilingual environment from age 0-6 boosts annual income at age 30 by 2.5%". That kind of study, and those kinds of numbers, can be sound and solid in sociology. But we can't do that for software engineering. We don't have thousands of similar software companies or projects to study, so we can't produce numbers like "companies that invest heavily in unit testing see a 5% boost in revenue". Many people (including me) strongly believe in unit testing and practice it in our daily work, not because we were convinced by a statement like "smoking increases the chance of cancer by x%", but because we tried it and found it helpful.
Look at DevOps. There is some data offered to show it's a good idea, but that data is not solid enough to pass strict examination. "Job satisfaction increased by 10%": according to what? Some internal survey? We know that people "engineer" such surveys to get the results they want to see. "Release cycle shortened by 50%": could that have been achieved equally effectively without DevOps? "Live-site incidents reduced by 30%": maybe the team was already on a steady trend toward higher efficiency. It's a common logical fallacy (post hoc ergo propter hoc) to assume that when two things happen in sequence, the first must be the cause of the second.
By questioning all this, I'm not arguing that we should stop searching for and trying out new ways to make software engineering more effective and produce better outcomes. We should never stop searching, trying, and getting better.
Even as we believe we are becoming more data-driven in adopting new practices, our approach to advancing engineering practice today remains very pragmatic: some people get an idea and go ahead and try it. Sometimes it works, sometimes it doesn't. When it works, they tell people. Others try it, see the benefit, and tell more people. More people follow suit, and over time the practice becomes common and popular, until eventually everybody adopts it as the standard way of doing things. When we want to become more data-driven, we should recognize that the data we rely on is often weak evidence, a misuse of statistics, or open to interpretation.