Picture this: It’s Thursday afternoon and you just got an email from your development manager letting you know the team has completed the new features you requested. Everything’s done and ready for testing.
This is great news because your users have been clamoring for these improvements and you want to get them implemented as fast as possible. You fire off an email to your quality assurance team leader, letting them know that it’s time to get rolling on testing.
You put your feet up on the table and relax because in a short amount of time, your QA army will have everything tested. The bugs will be ironed out by the legions of developers you have on staff. The features will be automatically pushed into production, and your users will flood every corner of the Internet singing the app’s praises.
Sounds like a typical day in the office, right?
Probably not -- not even close. It's about as realistic as the way coding is portrayed in the movies, like that time Wolverine (Hugh Jackman) was a "hacker" in Swordfish.
If you lead a team responsible for improving and maintaining software at any one of the thousands of small- to medium-sized companies in America, the above scenario probably doesn’t sound feasible, let alone remotely realistic.
The fact of the matter is that companies across the country rely on software to function and succeed in business, and a lot of the time, making updates, connecting systems, and pushing features is a real struggle.
For most companies, having just one resource dedicated to quality assurance -- let alone an entire team -- would be considered a luxury.
Most development teams are relatively small and consist of people who wear many, many hats. The person designing wireframes and conceiving screen mockups one day might be the person coding the screens for the frontend tomorrow. Today’s backend engineer might be tomorrow’s frontend developer and vice versa.
On small development teams, quality assurance is a team sport, with everyone taking part to test different pieces of the software. Sometimes QA is handled by a small team that already has a ton of other things going on, with testing and other work carried out in tandem.
Oftentimes quality assurance is handled by the person responsible for building the features in the first place.
But a lack of resources shouldn’t result in a lack of QA. We can’t just skip software testing because we’re not optimally staffed for it.
Quality assurance is a vital part of software development. It’s the gut-check that lets us know how the application is functioning or in some cases, not functioning at all.
It gives us helpful warnings of things that may indicate trouble down the road. QA gives us performance cues and shows how today’s well-intended solutions may be impacting other functionality we weren’t even focusing on.
At the very least, it provides a fresh pair of eyes on a process that’s part creative, part method. At most, it can flag the danger of a software crash before a release goes live.
In most small to medium-sized companies, quality assurance is a largely manual process that involves a lot of testing, checklists, and time. As software matures, features are added, and the user experience becomes more complex, this kind of testing can grow into an unwieldy beast.
As functionality grows, so do the possibilities for an application to fail.
That’s where automated testing comes in.
Automated testing is literally software checking software for viability. It shoulders a growing group of largely automatable tasks, giving us time to do other things as well as the confidence to forge ahead. It frees humans from performing manual tests, completing the process in a fraction of the time. Answers come quicker and confidence grows.
If nothing else, automated testing saves the hours spent manually checking a process that software could have handled better automatically.
The real benefit of automated software testing -- the DevOps-minded goal of the whole venture -- is that it gives companies the ability to push updates to production continuously to keep up with consumer demand. Dev teams can make software updates confidently, push features frequently, and take advantage of true continuous integration and delivery (CI/CD).
It’s impossible -- or at least wholly unrealistic -- to achieve this feat without automated software testing.
Consider this hypothetical situation: an app has 10 web forms with 20 fields each, located on different screens in the application, that show information to both basic and admin users. A new feature is introduced to restrict functionality for basic users by hiding some fields on the fly. Conversely, the same feature opens up functionality for admin users on those forms to display even more information.
When the feature is delivered, impacting forms for varied levels of users, you want to test all those forms, right? Do the math and there are at least 200 fun and interesting ways those forms could break from a functional standpoint. On top of that, you need to make sure the forms display properly for each user. A potential nightmare scenario exists where admin fields -- and probably data -- are exposed to basic users.
Sounds like thorough testing is in order. Checking each and every form, switching between user roles, and testing all those scenarios could take hours and involve several opportunities for human error.
Getting a level of certainty that everything works properly seems like a herculean task. Not testing features at all could be a potentially embarrassing predicament. What if the whole matter could be automated?
The good news is that automated testing is a reality. What’s more, going forward, as new features are introduced, you could test everything -- old functionality and new -- to make sure everything is working properly.
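To make the forms scenario concrete, here is a minimal sketch of what automating those checks might look like. Everything here is hypothetical: `FORMS`, `ADMIN_ONLY`, `BASIC_HIDDEN`, and `visible_fields` are stand-ins for your application's real forms and visibility rules.

```python
# Hypothetical sketch: automating the 10-forms / 20-fields visibility check.
FORMS = {f"form_{i}": [f"field_{j}" for j in range(20)] for i in range(10)}
ADMIN_ONLY = {"field_18", "field_19"}   # assumed admin-only fields
BASIC_HIDDEN = {"field_0"}              # assumed hidden-from-basic fields

def visible_fields(form, role):
    """Stand-in for the app's real field-visibility rules."""
    fields = set(FORMS[form])
    if role == "basic":
        return fields - ADMIN_ONLY - BASIC_HIDDEN
    return fields  # admins see everything

def test_no_admin_fields_leak_to_basic_users():
    # The nightmare scenario from above: admin data shown to basic users.
    for form in FORMS:
        leaked = visible_fields(form, "basic") & ADMIN_ONLY
        assert not leaked, f"{form} exposes admin fields: {leaked}"

def test_admins_see_all_fields():
    for form in FORMS:
        assert visible_fields(form, "admin") == set(FORMS[form])
```

Once written, these two loops cover every form for every role on every run -- the hours of manual click-through collapse into seconds.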
Automated software testing tools give project owners the ability to automate large parts of a normally manual process, exercising hundreds of aspects of the software to verify viability and see the impact new features have on old functionality.
With the routine testing handled, quality assurance time can be refocused on finding new and interesting ways for users to break a system.
Automated testing gives you the peace of mind that, at the very least, you have a window into how the application is holding up. There’s a confidence in moving forward based on facts learned in the testing process.
And if there are roadblocks, they’ve been identified and can be handled before features go live.
Unit tests are simple tests that check whether individual parts of an application function as they should. Developers write unit tests continually to verify how components work and whether they’re doing what they should be doing.
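As an illustration, a unit test is usually just a small function asserting that one component behaves as expected. The `apply_discount` function below is a made-up example, not from any real codebase:

```python
def apply_discount(price, percent):
    """Unit under test: returns the price after a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

# Unit tests: small, fast checks that one component does what it should.
def test_apply_discount_basic():
    assert apply_discount(200.0, 25) == 150.0

def test_apply_discount_rejects_bad_input():
    try:
        apply_discount(100.0, 150)
    except ValueError:
        pass  # expected: invalid percentages are rejected
    else:
        raise AssertionError("expected ValueError for percent > 100")
```

A test runner such as pytest can discover and run every function named `test_*` automatically, so the whole suite executes with one command.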
Regression testing simply means checking to make sure that the things that worked before still work now. Changes made to a piece of software that inadvertently break other parts of the tool cause it to lose functionality, or regress, hence the name of the testing.
If you’re making an update to the software and you want to make sure the things you developed and tested before will still work, you need regression testing.
Regression testing can be automated and run as a normal part of the development process, providing important red flags of potential issues.
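A common pattern is to pin a previously fixed bug with a test so it can never quietly return. This sketch is hypothetical -- `slugify` and the "consecutive spaces" bug are invented for illustration:

```python
def slugify(title):
    """Turns a post title into a URL slug."""
    # split() with no argument collapses runs of whitespace,
    # which is what fixed the old double-hyphen bug.
    return "-".join(title.lower().split())

def test_regression_consecutive_spaces():
    # Regression test: an earlier release turned "a  b" into "a--b".
    # This assertion fails immediately if that bug ever comes back.
    assert slugify("Hello   Automated  World") == "hello-automated-world"
```

Run as part of every build, a test like this is exactly the red flag described above: it fires the moment an old, fixed behavior regresses.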
Smoke testing is all about simple tests that uncover big problems. If a new feature is going to break a big part of the system -- say, the login screen stops working or the application won't launch at all -- it'll be uncovered during the smoke test.
Smoke testing deals largely with stability of an application.
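In code, a smoke test is often just a few cheap, fail-fast checks that the basics are alive. The `create_app` factory and its routes below are hypothetical stand-ins for a real application's entry points:

```python
def create_app():
    """Stand-in app factory; real code would construct your web app here."""
    return {"routes": {"/": "ok", "/login": "ok"}, "started": True}

def smoke_test():
    # A handful of coarse checks: does it start, are the vital screens there?
    app = create_app()
    assert app["started"], "application failed to launch"
    assert "/login" in app["routes"], "login screen is unreachable"
    assert "/" in app["routes"], "home page is unreachable"
    return "smoke test passed"
```

If any of these fail, there is no point running the deeper, slower suites -- which is exactly why smoke tests run first.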
Performance testing analyzes the impact changes will have on overall application performance with respect to execution, response time, reliability, and scalability.
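A simple automated performance check can be as direct as timing an operation against a budget. Here `handle_request` and the 0.5-second budget are assumptions standing in for your real workload and service-level target:

```python
import time

def handle_request():
    """Stand-in for the operation whose response time matters."""
    return sum(range(10_000))

def test_response_time_under_budget():
    # Performance check: fail the build if the operation blows its budget.
    start = time.perf_counter()
    handle_request()
    elapsed = time.perf_counter() - start
    assert elapsed < 0.5, f"too slow: {elapsed:.3f}s"
```

Dedicated load-testing tools go much further (concurrency, scalability curves), but even a budget assertion like this catches a change that suddenly makes a hot path ten times slower.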
As the name suggests, functional tests are done to check the viability of user interfaces, API functionality, database connectivity, and any other thing that would impact the functionality of an application.
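Where a unit test checks one component, a functional test exercises a user-visible flow end to end. The in-memory `UserStore` and its signup/login flow below are hypothetical, sketched only to show the shape of such a test:

```python
class UserStore:
    """Hypothetical stand-in for a real user database and auth layer."""
    def __init__(self):
        self._users = {}

    def signup(self, email, password):
        if email in self._users:
            return {"ok": False, "error": "email already registered"}
        self._users[email] = password
        return {"ok": True}

    def login(self, email, password):
        return {"ok": self._users.get(email) == password}

def test_signup_then_login_flow():
    # Functional test: walk the whole flow the way a user would.
    store = UserStore()
    assert store.signup("a@example.com", "s3cret")["ok"]
    assert store.login("a@example.com", "s3cret")["ok"]
    assert not store.login("a@example.com", "wrong")["ok"]
    assert not store.signup("a@example.com", "again")["ok"]
```

In a real suite the same flow would run against live interfaces -- a browser driver for the UI, HTTP calls for the API, a test database for persistence.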
To the extent that it’s possible, every aspect of an application should be tested after a feature is released, from the look and feel to the software’s ability to function without crashing.
As the software matures and features are added, the testing grows in time and complexity, leaving someone with the responsibility of checking each and every aspect of the application to ensure a good experience.
In time, that quality assurance checklist can get out of control. Testing the same features over and over can lead to fatigue and, since we’re only human, errors. If your software is growing to keep up with users’ demand for features, the list of tests for quality can get out of hand.
Without automated testing -- or any testing for that matter -- implementation of new features can be risky. Pushing supposedly small updates to production without having certainty of their impact on the rest of the system can result in unintended user experiences.
While it can seem like a big undertaking, it’s possible to start small and build incrementally until you’ve got a robust, holistic testing process. The process is additive in nature and every test you write builds upon the last until you’ve got a library of tests you can rely on.
Automated testing tools are as varied as the types of testing you can perform so it’s best to partner with a provider that has experience in DevOps consulting services. [link]
Not everything can be tested automatically -- some things require a human touch -- and that can be a positive thing. Going through the motions as a user can unlock new possibilities and uncover new user experiences.
If you have any questions, contact us directly. Automated software testing is in our DNA and we speak fluent DevOps. Let us demystify the process.