Tuesday, September 14, 2021

Tests that are worth testing produce bugs that are worth fixing

Conclusions & Consequences:

  • If a bug is logged based on a scripted or automated test, it should be fixed as a worthy bug.
  • If we consider a bug not worth fixing, we should consider the test not worthy, and we may need to delete that test.
  • Over time, if we are not fixing bugs from scripted or automated tests, we will end up with zero scripted and automated tests.


Monday, August 27, 2018

Elastic testing

Have you ever been in a situation where the release deadline is firmly set, but no one knows when the features will be ready for testing?

If "Yes", you have been in a typical "elastic testing" situation.

Here are some characteristics of "elasticity":
  • test time is not set when the release date is set
  • even when test time is set, development is delayed and the release is not rescheduled
  • bug-fixing time is not accounted for

Monday, November 13, 2017

The best bad practices


It seems it is about time to share the list of my "favorite" bad practices:
  • Sporadic bugs are ignored. This compromises user experience and test automation.
  • Performance issues are ignored as sporadic. This leads to performance degradation over time.
  • Test environment does not match Live environment - "It is too expensive". This compromises test automation scalability and performance testing.
  • When problems (not clear bugs) are reported by QAs, they are ignored. If later the same problems are spotted by a client or manager, everyone starts working on them.
  • There are no set dates for development to complete the features, but release dates for the product are set. So testing is expected to be "elastic".
  • Automated tests are blamed for being slow while actually the application is slow.
  • UI changes are made without considering that they will impact automated tests.
  • Critical and blocking bugs do not reset the testing cycle at all.
  • There is no stable release branch for QA to test. QAs are forced to test on master branch where tens of commits are done every day.
  • Test blocking problems are not fixed for days.
  • QAs are overloaded with many parallel releases.
  • QAs' evaluation (salaries, bonuses, promotions) is in the hands of the development manager. Guess what the result of this is.
  • Management cares only about release dates but is not interested in quality. Quality for them is like religion. (If interested in this topic, read Rex Black.)
  • Management improves the development process by transferring more manual work to QAs - for example, automatically moving stories and bugs to ready status without them actually being ready for test.
And "yes" this list will grow :-)

Agile release flow

Agile teams strive to find the best release organization that will help them release quality products in relatively short iterations.

Without claiming that there is one holy grail for every team, I would like to describe one of the best flows I have used in recent years.

Here is a simplified diagram:

And below are some more details:

  • The main branch is the place where developers integrate their stories and fix most of the bugs. This is also where most (or some) of the testing happens.
  • There may be developer branches for more complex stories. However, in the end they are merged into the main branch.
  • Release branch is the place from where the releases are done.
  • The Features Complete (FC) milestone is important - at this point we are ready to (re)create our release branch. We need well-defined FC criteria in order to be consistent in our releases. Here are some ideas for good FC criteria:
    • Stories are tested and working for the main scenarios.
    • There are no known P0 and P1 bugs
    • Automation tests are stable and green
  • When testing on the release branch, any P0 and P1 bugs found should be fixed in both branches.
  • When FC is achieved and testing is focused on the release branch, developers are free to work in the main branch for the next release. At the same time, testers have a calm period without commits in the release branch (except for P0 and P1 fixes).
  • Handling inevitable hotfixes. There are 2 types of hotfixes:
    • one that occurs right after a release and before the next FC. This is easy to handle - just fix the problem in the release branch and do the release. (This is the hotfix in the picture above.)
    • one that occurs right after FC and before the release. This one is more complex, because the release branch already has too many new things and may not be ready for release. In this case we need to recover the tagged version of the last release, apply the fixes there, and then return to the current release process. If we keep the FC-to-release time short, we can fully mitigate this type of hotfix.
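For teams using git, the flow above can be sketched roughly like this. All repository, branch, and tag names (main, release, v1.0.x) are made up for illustration; the post itself does not prescribe a specific tool:

```shell
#!/bin/sh
# Sketch of the release flow with git; names are illustrative only.
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q -b main .          # -b needs git >= 2.28
git config user.email "dev@example.com" && git config user.name "Dev"

# Day-to-day: stories are integrated and most bugs are fixed on main.
git commit -q --allow-empty -m "story: integrated on main"

# Features Complete: (re)create the release branch from main.
git branch -f release main

# Release: tag the released commit on the release branch.
git tag -a -m "release 1.0.0" v1.0.0 release

# Hotfix type 1 (after release, before next FC): fix on the release branch.
git checkout -q release
git commit -q --allow-empty -m "fix: P0 found after release"   # also merge to main
git tag -a -m "hotfix release" v1.0.1 release

# Hotfix type 2 (after FC, before release): the release branch already holds
# unreleased work, so branch from the last release tag instead.
git checkout -q -b hotfix v1.0.0
git commit -q --allow-empty -m "fix: urgent, on last released version"
git tag -a -m "hotfix from tag" v1.0.2

git tag --list
```

The empty commits stand in for real story and fix commits; the key point is where each branch and tag is created from.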

So that is it. It is simple and it works.

Friday, June 2, 2017

Team Architect vs Test/Software Architect

I guess most of you have heard of job titles like Test Architect, Software Architect, etc.
These are very prestigious positions and a goal for many engineers.

My current thought is about the virtual role of a team architect. To a large extent it is related to another post of mine. And here are some random sparks on this topic.

A team architect:
  • is focused on people instead of technology
  • is trying to facilitate people integration instead of module integration
  • is looking for problems in human communication instead of protocol problems
  • is motivating people to grow instead of updating obsolete modules
  • is working to keep the team up and running instead of keeping servers running
  • is trying to balance the workload of people instead of the load of a web server
  • ... and so on

Peace

Do you really have working test automation?

Many companies/teams/people brag about their automation. However, I have a few questions that help me find the truth:
  • How many test cases do you have?
  • Can you show me the "morning run" results?
  • How are you handling bugs that block automated tests?
  • What is the average failure rate for the daily runs?
  • How are you dealing with flaky tests?
  • How much time are you spending on analyzing and supporting the automation tests?
  • What kind of CI is your team using, and how are your tests integrated with it?
  • What is your testing lab?
Note that there is no single right answer to each question, but those questions still work well to find the truth :-)

Tuesday, March 21, 2017

How to calculate the level of Management Quality Commitment?

I just sparked a thought about this:

MQCL = QARL

Where:

MQCL is a Management Quality Commitment Level
QARL is the Level at which QAs are Reporting into the organization or department. For example:

  • 1 - Operational manager
  • 2 - Senior managers
  • 3 - Directors
  • 4 - VPs
  • 5 - CEO
It seems this becomes more accurate as the company gets bigger.
Note: Quality commitment is not equal to product quality.