Hello everyone, my name is Brian Okken. Welcome to the Python Test Podcast. Today I'm going to be answering a question. Well, that feels pretty cool; that's got to be the first time I've ever had a reason to say that phrase. Anyway, questions, comments, and reactions to the podcast: you can send them through the contact form on pythontesting.net, or you can send them to me on Twitter. The show is @testpodcast, and I am @brianokken. Oh, hey, before we get started, I'd like to thank a local Portland band called And And And for letting me use some of their music in the show. I found them at andandand.bandcamp.com. Oh, that's a mouthful. You can find out more about them at their website, andandandmusic.com.

So the question I'm going to address is one I received via Twitter, and it's: are you going to cover stuff like why testing? You know, actually that threw a wrench into my assumptions. At first I thought, well, isn't it obvious why you would test? But then I thought about my own experience, and watching my colleagues over the years, and I guess it isn't obvious. It was obvious to me why someone should do testing, of course, but it wasn't obvious why that someone should be me, and how I could benefit. It was several years into my career, I think, before I started incorporating automated testing into my development process, and I still encounter developers who agree that testing is important but think it should be done by somebody else. It's also common for testing to be done after development, which is one of the reasons why I'm doing this podcast, to try and stop that.

So, I'm going to talk about that today, but I'm going to expand the topic. Really, what I'm going to talk about is why testing, what are the benefits, why automated testing over manual testing, why test first, why do automated testing during development, and why test at the user API level. Some of the reasons are business related and very practical. Some are very personal reasons, but my favourite reasons are the pragmatic reasons that appeal to the developer and artist in me.

Current funding for the podcast comes from sales of Python Testing with unittest, nose, and pytest. You can find that at pythontesting.net/book. I've also created a Patreon campaign. It's at patreon.com/okken, that's o k k e n. But since my last name is a bit uncommon, I've also included a link to the Patreon campaign at pythontesting.net/podcast. I'll be talking more about that at the end of the show.

Ok, so back to the question: why testing? First, I want to set it up. In order to maximise the benefits of software testing, and especially automated software testing, I'm going to make some assumptions. They are assumptions based on how I like to do projects, assumptions about how testing is used, because the benefits you get from testing largely depend on how you incorporate testing into your development process. So here's my ideal setup for maximising the benefits.

First, a thin GUI built on top of the same API that the customer has access to. This allows for complete functional testing through the user-visible API, mostly. I also often incorporate a debug API, allowing tests to interrogate the internals of the system as necessary. This debug API is not visible to the users, or the end customers. It's turned off in release builds, but available during testing.
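To make that setup a little more concrete, here's a minimal sketch of what I mean, written with pytest. Every name in it is made up for illustration: a hypothetical measure package with a user-visible acquire() function, plus a debug hook that simply isn't present in release builds.

    import pytest

    # "measure" is a hypothetical user-visible package; importorskip keeps this
    # sketch honest by skipping cleanly if it isn't installed.
    measure = pytest.importorskip("measure")

    def test_acquire_returns_requested_number_of_samples():
        # Functional test: exercise the system only through the customer-facing API.
        data = measure.acquire(samples=100)
        assert len(data) == 100

    @pytest.mark.skipif(not hasattr(measure, "debug"),
                        reason="debug API is stripped out of release builds")
    def test_internal_buffer_drained_after_acquire():
        # Debug-API test: poke at internals the end customer never sees.
        measure.acquire(samples=100)
        assert measure.debug.buffer_depth() == 0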
So: thin GUI, complete functional testing possible through the API, debug if we need it. Next, testing is focused on functional testing. Tests and software are developed in tracer bullet fashion to reduce or eliminate the need for mocking. Next, critical subsystems are identified, to have more formal APIs and more rigorous subsystem testing. So that means most of the testing is going to be at the functional level, at the API level, but we are always going to have some critical subsystems that need some focussed subsystem testing. Tests are developed before the code, but not too much before. Requirements are fleshed out through the writing of tests, and the API is developed through the writing of tests. Unit tests that are not traceable to customer requirements are kept segregated from the functional tests. Next, if a QA team or individual is part of the process, and I've worked on projects both with and without test teams, they need to be engaged from the start of development and work closely with developers. Developers write tests, even if a QA or test engineer is involved. And lastly, there is no separation of QA tests and dev tests; they all go into the same functional test suite bucket. That's actually kind of a preview of what I consider lean test-first, or lean test-driven development.

Ok, that's the set-up. So if we're doing that, what are our benefits? First off, the business-related, practical benefits of testing. You know, actually, before I start this I'm going to note that I thought I'd have like a handful of these things, then I started writing them down and, man, there's a lot here.

Ok, so business-related, practical benefits. You want to find the bugs before the users do. Next, find physical and computational errors in your software. You want to improve the external quality of the software. Make sure the software meets the user requirements and technical specifications, and make sure that the software works as expected. Safeguard critical customer functionality, certain processing speed requirements, etc., things that are really important to the customer, or to a particular customer that happens to be a huge customer, maybe. Make sure their stuff really works. Ensure consistent and reliable results and data between releases; that increases customer confidence in upgrading to new releases. This allows you to increase your agility without sacrificing quality. People make decisions based on your software and the data that your software produces, so the software has to be trusted, or else those decisions can't be trusted. Data reliability and behaviour reliability increase in importance with the risk of failure. Well, what does that mean? There are some situations, like healthcare or air traffic control, where lives are at stake. That sort of stuff is really, really important; you've got to make sure that's rock solid. But, you know, if a certain brand of printer doesn't work with your software, it's not the end of the world.

Ok. Next, we are still in the business-related practical benefits; I haven't even got to the fun stuff yet. So, next up, better requirements and system specs. If you develop the tests while you are fleshing out the requirements and specs, then you can trace the test cases to the user requirements and specs directly. Test writing can help define requirements and specs.
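One lightweight way to get that traceability, as a sketch: tag each test with the requirement it covers. The marker name "req", the SRS identifier, and the function under test are all invented here, and the marker would need to be registered in pytest.ini to avoid warnings.

    import pytest

    def save_csv(results, path):
        # Stand-in for the real code under test; in a real project you'd import it.
        path.write_text("\n".join(str(r) for r in results))

    @pytest.mark.req("SRS-104")  # invented requirement: "user can save results to CSV"
    def test_save_results_to_csv(tmp_path):
        out = tmp_path / "results.csv"
        save_csv([1.2, 3.4], out)
        assert out.read_text().splitlines() == ["1.2", "3.4"]

Something like pytest -m req then selects the requirement-tagged tests, and those marker arguments give you a requirement-to-test map you can hand to whoever owns the spec.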
Writing those tests highlights inconsistencies early, where they are cheaper to fix, and actually the act of writing tests helps clarify how the software should perform, and it helps you define the requirements better.

Well, that brings me to better API design. If you are designing the API at the same time you're developing the tests, you can better see the system from the user's perspective. People become more intimately familiar with the user requirements, the specification, and how the user can use the system. You get a better understanding of the range of inputs and which functionalities are interdependent, and that also brings me to exposing unintended functionality dependencies early. You will have more opportunity to make a more usable API early in the development cycle. When you are writing tests against the API early, before the functionality is actually there, before the implementation, then if it's clunky and you're like, man, I just hate using this API, you can change it so that it is a pleasure to use. Your customers are going to have to use it, so make it nice. You can make sure extreme boundary cases are thought about and designed into the system early, and edge case questions can actually be answered quicker, and with less hit to productivity, because the decision makers that came up with those requirements just got done thinking about them, so they are more able to answer questions about boundary and edge cases.

Ok, another benefit is a reduced cost of producing software. I think that's definitely true. You're going to reduce the cost of producing software. You reduce the need for manual testing, for one; manual testing is a price you pay at every release, and it only increases with increased feature sets. Automated tests can be run way faster for each release. Tests written early can be reused during development to speed development. Developer testing always happens, whether it's automated or not. If it's automated, it happens faster and more consistently, and that speeds up the dev-test cycle. If it's manual, it's just slow. Tests for each release can be reused for future releases. More time can be spent on new code versus fixing old code. You reduce the time you need for the final QA portion of the release cycle, and that reduces costs. An interesting thought is that you reduce the cost of failed projects: you can find out faster if your software or hardware or kernel architecture even allows the new functionality. And for non-failed projects, which I hope is all of yours, you get faster feedback on correct implementation, and faster delivery of prototypes and tracer bullet implementations. Reduced need for independent testing. Now, you know, this also depends on the risk. There are certain areas where you really are going to want independent testing, but it's going to be a lot easier if you've already mostly tested the system. Adding a new feature or a bug fix: it's going to reduce the time it takes to get that change into the code and off to shippable code. Changes go through the system faster. Customer fixes can get released to the customer faster. This allows for quick releases to customers at higher quality levels.

You're going to have less bloated releases. This might not be obvious to you, but it happens: if something is holding up a release and it gets pushed out, more features get shoved in. It doesn't make sense that you would add more features to an already slipped schedule, but it happens. So, if you can avoid slipping schedules, you get less bloated releases.
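Before leaving the business benefits, let me hang a quick sketch on that boundary-case point. If the spec says sample counts run from 1 to 10,000, you can pin the boundaries down while that decision is still fresh. The acquire function and the limits here are hypothetical; in a real project the function would be imported rather than defined inline.

    import pytest

    def acquire(samples):
        # Stand-in for the user-visible API; validation lives at this boundary so
        # the subsystems underneath can assume the input is already sane.
        if not 1 <= samples <= 10_000:
            raise ValueError(f"samples must be between 1 and 10000, got {samples}")
        return [0.0] * samples

    @pytest.mark.parametrize("samples", [1, 10_000])
    def test_acquire_accepts_boundary_values(samples):
        assert len(acquire(samples)) == samples

    @pytest.mark.parametrize("samples", [0, -5, 10_001])
    def test_acquire_rejects_out_of_range_values(samples):
        with pytest.raises(ValueError):
            acquire(samples)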
Ok, so I probably forgot some, but those cover, I think, the business benefits of this way of doing testing.

Right now I want to talk about something that I think is often left out of the discussion, and that's personal reasons to embrace testing. I think it increases your job satisfaction. You get less doubt in your own code, less doubt in your teammates' code, so your team just sort of works better together. And don't forget the cover-your-butt aspect. You get increased confidence in your own code, and in your ability to change something without breaking something else, or breaking a customer's process. I mean, who wants that call from your manager, or your manager's manager, saying hey, you know, bigwig customer X just found a bug in your code? Man, that's stressful. So, put tests in place so that doesn't happen, and you have fewer late nights and weekends fixing bugs. And that crazy crunch period before releases, I think it's less crazy if you've got a really good test suite.

Ok, my favourite: the pragmatic, day-to-day developer benefits of testing. First, increased productivity. You get more done. You find bugs earlier, which makes them cheaper to fix. Preventing bugs from happening at all is way cheaper than fixing them, so you can focus on new stuff and not on fixing old stuff. Having tests in place also allows you to focus on one piece of functionality at a time. You don't have to keep wondering whether or not you are breaking other stuff; you've got tests in place to make sure you don't. You fix bugs once, instead of continually fixing them over and over again.

Error handling is something that just happens. You know, you want to be able to capture invalid input from users and do that consistently. That's hard to do if you do it from the bottom up, but if you design the API first, with tests, it's easier to consistently handle errors up front, to decide where the error handling is going to be implemented in your design, and that relaxes the error handling requirements for most of the subsystems. You can handle it closer to the user API. This allows for smaller code units that don't have to deal with edge cases that are never going to be there.

Tests define the finish line for developers, especially in the good case. I mean, occasionally it happens that a developer will write some pretty darn good code the first time, but they won't believe it; they will sit there and test it for a day because that's never happened to them before. So, if you've got a defined finish line, developers can trust themselves more, and say, oh yes, cool, it's done, move on. It builds developer confidence in their software, in their co-workers, and in their own accomplishments. Problems are found closer to the time that they are created, and this helps in debugging. Having a robust test suite allows for refactoring and rewriting.

Oh, one other cool aspect is it allows for incremental integration of multiple components. What does that mean? So you've got components coming in from several different groups maybe, or at least several different people. It's a really good idea to test each of those one at a time. Bring one chunk of code in, integrate it, run all of the tests against it, bring another one in, run all of the tests against it, and just keep doing that incrementally.
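As a hedged sketch of that incremental flow: one way to keep the "run everything after every chunk" loop quick is to mark a small sanity subset you can run on each integration, saving the full suite for less frequent runs. The marker name "sanity" and the test names are my own invention here, and the marker would be registered in pytest.ini.

    import pytest

    def acquire(samples):
        # Stand-in for the real user-visible API you'd import in a real project.
        return [0.0] * samples

    @pytest.mark.sanity
    def test_basic_acquire_round_trip():
        # Quick check that a freshly integrated component still plays nicely.
        assert len(acquire(samples=10)) == 10

    def test_long_acquisition():
        # Unmarked, so it only runs with the full functional suite.
        assert len(acquire(samples=10_000)) == 10_000

    # After integrating each chunk:          pytest -m sanity
    # Before calling the integration done:   pytest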
That whole cycle is really slow if you've got a bunch of manual tests. If you've got automated tests that run fairly quickly, at least a sanity test suite, it's a lot faster. It makes your life easier as a developer.

Simpler designs. One example that I've seen is using code coverage analysis tools during functional testing to show which parts of your code are over-designed and not even used. You can rip that stuff out. That's my favourite use of code coverage tools: not to tell you where to test more, but to tell you where to get rid of code. A good test strategy allows you to focus testing on critical functionality, error-prone functionality, and dubious code and components. We've all got chunks of code that are like, man, don't touch that, it's frightening. Well, put some tests around it and it will be less frightening, and you can refactor it better, or at the very least you know it works even if nobody understands what it does.

Testing helps fight the YAGNI battle for you -- you ain't gonna need it. Developers have a tendency to stick "I might need it later" functionality into every function, or "what if this edge case happens" handling. Well, you can battle that with functional API tests that say you can't get there. If you can't get to that edge case, then your code doesn't have to deal with it. It also helps you avoid writing code that doesn't add business value. If you build your tests from the requirements and from the specs, it's hard to write extra "I might need it later" code, or at least your teammates will hopefully help stop you from doing that.

So I'm hoping this leads to better design. I think it does. This is a hard one to test, but what do I mean by better design, and how does testing help that? Well, testing forces you to pay more attention to interfaces, and the bigger the interface you have, the more you've got to test it. So that just naturally keeps your interfaces leaner, more usable, and obvious. What do I mean by obvious? Well, it just shouldn't be surprising. If you're incorporating testing early in the API design, that will help. It reduces code bloat and scope creep; I kind of talked about that before with YAGNI. It produces leaner code. Less code is less cumbersome and easier to understand. Rewrites and refactors are less risky. Reduced complexity of the codebase. And there's this notion that there's a psychological bias about what is possible, how it should be implemented, and how it should be tested, and there's a problem with independent testing versus implementation testing. You kind of take that out of the equation if you test first. Well, it's not completely out of the equation, but if you write the tests first, before you write the implementation, then you are seeing it from the user's perspective, and what could go wrong in corner cases, and then you do the implementation.

Another benefit is increased developer understanding of the whole system. If they're looking at all the tests against the API, it's easier to understand the system from the customer's perspective. It brings the developers closer to an understanding of the customer and the customer requirements. I think that focusing on testing sharpens critical thinking skills. It sharpens big-picture thinking, design skills, and cost-benefit analysis skills. It also increases team ownership of the entire code base. Less of the "that's not my code, if it breaks it's not my fault." I hate hearing that.

It protects from old bugs coming back. This happens.
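A hedged sketch of what that protection can look like: when a bug gets fixed, a test goes in that fails if the old behaviour ever comes back, with a comment pointing at why it's there. The issue number and the parsing function are invented for illustration.

    def parse_reading(text):
        # Stand-in for real code. Hypothetical issue #412: readings arrive as
        # "3.5 dB" and a plain float(text) used to blow up on the unit suffix,
        # so the fix splits the number off before converting.
        return float(text.split()[0])

    def test_reading_with_unit_suffix_still_parses():
        # Regression test for issue #412. If someone "cleans up" parse_reading
        # back to float(text), this fails and the old bug stays dead.
        assert parse_reading("3.5 dB") == 3.5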
You know, you've got some Frankenstein code that you look at, but it's there for a reason. It probably fixes a customer problem or a critical corner case. So someone comes through and "cleans up" the design, but then the bug that the Frankenstein code was there to fix comes back. So, incorporate testing into that whole thing, either when you put the Frankenstein code in in the first place, or by putting tests around the Frankenstein code before you refactor it. Then you can make sure the code is refactored and rewritten such that the things the Frankenstein code was there to fix stay fixed. Kind of convoluted, but it happens all the time.

Next, tests supplement other forms of documentation. I've heard this before, people saying tests supplement documentation, and I don't think they replace documentation, but there are some discussions about requirements that make it into the tests but do not make it into the other documents. So, yeah, they supplement the other forms. I think you also get faster onboarding of new developers. They are less scared to modify the system, and they can see from the tests what happens when they do modify it. Code reviews can focus on style, design, group coding standards, etc., and not on correctness. Now, test code reviews are great for focusing on correctness, but they can be focussed on the correctness of functionality without looking at the implementation. And getting the whole team involved in test code reviews really increases the understanding of what everyone on the team is working on.

I think that's it. I've probably missed some stuff, but that's my list of the benefits of software testing. I didn't even count them; I have no idea how many are there. Perhaps in a future podcast I will discuss which of those benefits you lose if you remove some of the assumptions I made about your development process. This is already getting long, so I think I'm going to wrap things up.

[music]

Ok, I want to talk a little bit about the Patreon campaign. I want to make this the best quality podcast I can, and you can help. I started the podcast so that I could help more people than I can through just the writing. And if I can reduce the time and expense it takes to produce each episode, I can create more episodes. I'd like to be able to produce two, maybe at most three episodes per week. But, you know, as you noticed from last week, sometimes I don't even get one out. Last week was a little bit of a special exception. My daughter's birthday was kind of a big deal. She turned six and we kind of wanted to make a big deal out of it, so I took the day off; it was fun. Anyway, I'd like to do more episodes. I've gotten some listener feedback about a particular topic she'd like me to cover. I've also received requests for extras such as transcripts. These would take a lot of time for me to produce, and money if I hire somebody to do them, but I'd like to. I'd like to hire help to be able to get more episodes out to you faster; someone to write transcripts would be great, and someone to do audio editing and a little bit of cleanup would actually save quite a bit of time. Until now, the podcast has been supported through book sales, at pythontesting.net/book. I still love those sales, but that isn't sustainable. I mean, you're not going to buy more than one book, right? And I think the listener crowdfunding model seems pretty cool. It allows a carrot model to encourage me to produce more episodes, and it allows for patron goodies, maybe supporter-only content for Patreon supporters.
If every listener donated a buck per episode, I'd have the creative freedom to deliver a great quality product with a reduced cycle time. I'd also not need to seek sponsorship, but even below that goal, I'd like to have a balance between listener support and sponsorships. I want my listeners to have a sense of ownership, and to get early access to great information. I truly appreciate anything that you can donate to this, no matter the size. And I've got a few carrots for you.

So, before we get into that, I don't want you to donate if you can't afford it, and please be sure to enter a cap. The Patreon process has this way you can say how much you want to donate per episode, but you can cap it per month, and I think that's a great thing, because people know: oh yeah, I want to give you more to try and get you to do more shows, but I can only afford this much per month. Make sure that you do that, so that even if I go hog wild and produce gobs of episodes, which probably won't happen, you won't get charged more than you are comfortable with. So, for a dollar pledge, I'll just be really grateful, and that would be cool, but for $2 or more, I'm going to give you early access to transcripts once I get them made, some special audio recordings, and maybe early notes on upcoming shows.

I'm going to wrap things up. You can catch me on Twitter @brianokken; the show is @testpodcast. Please keep the feedback coming. I'd like to know what you'd like me to talk about next.