The conventional view of Software Testing

The hardware and software worlds may seem poles apart, and in many ways they indeed are. Yet despite the seemingly massive differences in the final product, they share more in common than you might expect. Computer engineers at places like Intel, just like software engineers, spend most of their time sitting at their desks, writing Verilog code that implements the desired system behavior.
They then compile (synthesize) their code in order to generate lower-level outputs (digital circuits and physical layouts). And finally, they write automated tests that exercise their SUT (system under test), to ensure that the code is functionally correct. Sound familiar? I know all this intimately, given my own past as a hardware engineer and my later transition into software development. So you might assume that the two industries treat testing similarly. Way off.

Raising the Stakes

The elephant in the room in most software companies is the perceived importance of testing.
In hardware, pre-silicon verification is a first-class citizen of the development process. Dedicated verification engineers earn six-figure salaries, sit next to their RTL-design counterparts in all planning meetings, and enjoy careers that are just as prestigious and lucrative.
Because the tapeout process is so expensive and time-consuming, finding even a single bug can delay your product launch by months and cost you millions of dollars in additional expenses. Or worse: a bug found after your customers have already purchased and installed the chips can result in extremely expensive product recalls, even if the fix is a simple one-line code change.
The consequences of software bugs can certainly be disastrous. But at least the fix is logistically cheap: code deployments and software patches are vastly faster and cheaper than manufacturing and distributing new silicon.
This is why hardware organizations take testing much more seriously than comparable software companies. And the results speak for themselves: hardware products in the hands of customers have an order of magnitude fewer bugs, and the percentage of bugs caught prior to release is vastly higher in the hardware industry than in the software industry.
A Better Way

It is tempting to conclude that hardware teams are better at testing purely because of their greater financial investment. Such a view is unjustifiably optimistic about the current state of affairs, and pessimistic about our potential for improvement.
Over the past decades, we have vastly improved our software-development practices and methodologies. Even though many programmers tend to shortchange it, testing methodology is itself a skill set: one that an entire industry learns over time, at a rate proportional to its level of investment. And in this sense, the hardware industry is miles ahead when it comes to testing best practices.
If you want to master the art of testing, talk to a hardware verification engineer.

Word of Warning

The universe of all possible inputs and corner cases is infinite. You will never cross the finish line. All you can do is chase as much coverage as can be attained with the time and resources available.

Manual testing cannot be code-reviewed on GitHub. Manual testing is subject to human error, whether due to oversight or laziness.
Manual testing is extremely time- and labor-intensive when repeated for every single release. There might be specific cases where a test cannot be automated, but these should be the exception, not the norm. Anything important enough to test by hand is important enough to build an automated test suite for.
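To make this concrete, here is a minimal sketch of turning a manual smoke check into an automated one. The `apply_discount` function is a hypothetical stand-in for whatever you currently verify by hand:

```python
# Hypothetical example: a tiny automated check that replaces a manual
# smoke test. Once encoded, it runs identically on every release.

def apply_discount(price: float, percent: float) -> float:
    """Return price after applying a percentage discount."""
    if not (0 <= percent <= 100):
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

def test_apply_discount():
    assert apply_discount(100.0, 25) == 75.0   # typical case
    assert apply_discount(100.0, 0) == 100.0   # boundary: no discount
    assert apply_discount(100.0, 100) == 0.0   # boundary: full discount
    try:
        apply_discount(100.0, -1)              # invalid input is rejected
        assert False, "expected ValueError"
    except ValueError:
        pass

test_apply_discount()
```

Unlike a manual checklist, this never skips a step out of boredom, and the checks themselves can be code-reviewed like any other change.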
Testing Two Inputs in Isolation

Even if the implementation is indeed decoupled at the time the test is written, it can often evolve to become coupled later. This is certainly a reasonable decision to make, depending on the particular project circumstances and the events being considered.
The only incremental change that needs to be checked in the second test is the returned boolean. Just because something works fine for one event does not mean that it will work fine for all other events. Think of the old joke about the QA engineer who walks into a bar: orders a beer. Orders 0 beers. Orders 99999999999 beers. Orders a lizard. Orders -1 beers. Orders a sfdeljknesv. That is the kind of test suite you want: one that can safeguard your release process against a wide variety of bugs and refactoring-induced errors. One where you feel comfortable deploying-on-success, no matter how invasive the changes are.
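The beer-ordering corner cases above translate directly into an automated suite. A sketch, with a hypothetical `order_beers` function standing in for the system under test:

```python
# Hypothetical order_beers function, used to illustrate corner-case testing.
def order_beers(quantity):
    if not isinstance(quantity, int):
        raise TypeError("quantity must be an integer")
    if quantity < 0:
        raise ValueError("cannot order a negative number of beers")
    return f"{quantity} beer(s) poured"

# Drive the function through the full list of corner cases.
valid_cases = [1, 0, 99999999999]
error_cases = ["a lizard", -1, "sfdeljknesv"]

for qty in valid_cases:
    assert order_beers(qty).endswith("poured")

for qty in error_cases:
    try:
        order_beers(qty)
        assert False, f"expected an error for {qty!r}"
    except (TypeError, ValueError):
        pass  # each invalid input is rejected, not silently accepted
```

The point is not any single assertion, but that the whole zoo of inputs runs on every release with zero added effort.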
Write tests that are expected to produce a whole bunch of outputs, and then check that every one of them does show up. Write tests that are expected to produce no outputs at all, and check that nothing does. Each individual check may seem trivial and minuscule, but together, it all adds up. Test every corner case you can think of.

White-Box Testing to Enhance Test Coverage

A quick primer. Black-box testing: testing a method purely on the basis of its specs, without any regard to the specific implementation used.
White-box testing: using the specific implementation details to guide your testing priorities. White-box testing, when done right, can greatly improve your test coverage by better checking for correct behavior at key edge cases.
Better white-box testing: from looking at the implementation, I know that it uses hashing and linear probing to achieve the desired functionality. The trickiest corner cases occur when two different elements collide at the same array offset, and especially when one of these previously-inserted elements is later removed, producing a tombstone entry.
Hence, in addition to the above black-box tests, I will write tests with specific inputs that trigger these tricky corner cases. The first approach may be adequate if a sufficiently large test suite is used, but the second approach is more likely to find bugs with a much smaller test suite, by identifying and triggering the specific corner cases that are most at risk. The more controversial uses of white-box testing arise when it is used to weaken, rather than strengthen, the test suite.
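Here is what that might look like in practice: a toy open-addressing set with linear probing and tombstones (an illustrative implementation, not any particular library's), plus a white-box test that deliberately forces a collision followed by a removal:

```python
# Toy open-addressing hash set with linear probing and tombstones,
# used to demonstrate a white-box test. Illustrative only.

EMPTY, TOMBSTONE = object(), object()

class ProbingSet:
    def __init__(self, capacity=8):
        self.slots = [EMPTY] * capacity

    def _index(self, key):
        return hash(key) % len(self.slots)

    def add(self, key):
        i = self._index(key)
        while self.slots[i] not in (EMPTY, TOMBSTONE):
            if self.slots[i] == key:
                return
            i = (i + 1) % len(self.slots)  # linear probing
        self.slots[i] = key

    def remove(self, key):
        i = self._index(key)
        while self.slots[i] is not EMPTY:
            if self.slots[i] == key:
                self.slots[i] = TOMBSTONE  # don't break probe chains
                return
            i = (i + 1) % len(self.slots)

    def __contains__(self, key):
        i = self._index(key)
        while self.slots[i] is not EMPTY:
            if self.slots[i] == key:
                return True
            i = (i + 1) % len(self.slots)
        return False

# White-box test: with capacity 8, the keys 0 and 8 both hash to slot 0.
# Removing the first leaves a tombstone that lookups must step over.
s = ProbingSet(capacity=8)
s.add(0)
s.add(8)       # collides with 0, probes to the next slot
s.remove(0)    # leaves a tombstone at slot 0
assert 8 in s  # lookup must step over the tombstone, not stop at it
assert 0 not in s
```

A black-box test with random inputs might eventually stumble onto this collision-then-removal sequence; the white-box test hits it on the first try, every time.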
Using white-box testing as justification for neglecting certain corner cases is a double-edged sword. On the flip side, using white-box testing to enhance your test suite can pay huge dividends and make your codebase truly bulletproof.

Is there a real difference between the coverage of unit tests and integration tests? In theory, there is none. But in practice, there is.
In theory, unit tests can give you the exact same coverage you can get from integration tests. But no matter how thorough you try to be in your unit tests, you WILL find bugs when running integration tests. This is something hardware teams have learned painfully over the years, and it is why no hardware project ever skimps on integration testing.
On any project of sufficient complexity, you will never do a good-enough job. You will repeatedly build fakes that differ from the real component in ways that turn out to be subtle but crucial. You will repeatedly fail to anticipate the disastrous emergent behavior that can result from seemingly innocuous changes.
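A sketch of how a fake can diverge subtly from the real component (both classes here are hypothetical): the fake forgets to model a size limit, so a unit test passes while the real system rejects the same call:

```python
# Illustration of a fake silently diverging from the real component.
# Both classes are hypothetical stand-ins.

class RealStore:
    """Stands in for the production store, which rejects oversized values."""
    MAX_VALUE_BYTES = 16

    def __init__(self):
        self._data = {}

    def put(self, key, value):
        if len(value.encode()) > self.MAX_VALUE_BYTES:
            raise ValueError("value too large")
        self._data[key] = value

class FakeStore:
    """A test fake that forgot to model the size limit."""
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value  # subtle divergence: no size check

def save_profile(store, name, bio):
    store.put(name, bio)
    return True

# The unit test against the fake passes...
assert save_profile(FakeStore(), "ada", "x" * 100)

# ...but the same call against the real component blows up.
try:
    save_profile(RealStore(), "ada", "x" * 100)
    assert False, "expected ValueError"
except ValueError:
    pass  # only a test against the real store catches this
```

Every behavior the fake fails to model is a blind spot that only an integration test against the real component can expose.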
I was once on a team, staffed by brilliant and very accomplished developers, that went all-in on unit tests and banned integration tests outright. We had near-perfect test-coverage metrics, but somehow, things would keep breaking in production every now and then, sometimes in disastrous ways. Nothing we tried seemed to do the trick. It was only once we put together an end-to-end test suite that things finally improved. At that point, we went nuts with all sorts of new features and refactoring changes, and our test suite never let us down.
The above is no isolated example, either. The Rust compiler developers wrote a great article about how they manage to produce a new stable release every six weeks, even though most other compilers have much longer release cycles. They credit end-to-end tests for much of their success.
They had indeed built a solid suite of unit tests, and yet they still leaked a number of major bugs that were only found through end-to-end tests. By improving the effectiveness of their test suite, they were able both to prevent major production bugs and to speed up their development cycles: a true win-win that we should all aim for.
Strengths and Limitations

Ironically enough, despite all my proselytizing above, you will find that most hardware testing is done at the unit (cluster) level. Why is this? Surely this validates the software industry's norm of prioritizing unit testing as well?
Context is vital here. In hardware, a unit (cluster) test can finish in minutes, whereas integration (full-chip) tests take many hours, sometimes even days, to finish.
This is why most hardware testing is done at the unit level. In software, by contrast, even a sizable integration suite can often finish in minutes: barely enough time for a dev to grab some coffee. Unit tests do still have their place, especially for their value in reproducing obscure error conditions (e.g., network timeouts) and other rare corner cases that are hard to induce in a real system.
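For instance, a rare condition like a network timeout can be induced on demand in a unit test. A sketch, where `fetch_with_retry` and the stub network are hypothetical:

```python
# Sketch: a unit test that induces a network timeout on demand, something
# hard to reproduce reliably against a real integrated system.
# fetch_with_retry and flaky_fetch are hypothetical names.

import socket

def fetch_with_retry(fetch, url, retries=2):
    """Call fetch(url), retrying on timeout; return None if all attempts fail."""
    for _ in range(retries + 1):
        try:
            return fetch(url)
        except socket.timeout:
            continue
    return None

def flaky_fetch(fail_times):
    """Return a stub 'network' that times out fail_times times, then succeeds."""
    state = {"left": fail_times}
    def fetch(url):
        if state["left"] > 0:
            state["left"] -= 1
            raise socket.timeout()
        return "payload"
    return fetch

# Two timeouts, then success: the retry logic should recover.
assert fetch_with_retry(flaky_fetch(2), "http://example.com") == "payload"

# Three timeouts exhaust all attempts: the caller should get None.
assert fetch_with_retry(flaky_fetch(3), "http://example.com") is None
```

Reproducing this scenario against a real network would require actually inducing timeouts at exactly the right moments; the stub makes the corner case deterministic and instant.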
However, the bread and butter of your test suite should be integration tests. Not only can you cover your entire codebase with a far smaller and simpler test suite, but you can also gain rock-solid coverage of the nuanced interactions between different components. How much should you test? Every output, for every combination of events? Every possible corner case? In my experience, that is not too much.
Rethinking Software Testing: Perspectives from the world of Hardware