Lessons Learned from Software Development

All right, so I'm Vince Coppola, and I'm going to be talking about lessons learned from software development.

You would want software development to be a very straightforward process: you would start with some identified need, write some requirements, take those and make a design, take that and develop code, and test against it until you could deliver by doing some acceptance testing, with some maintenance at the end. However, when you look at the identified need, the customers often understand it only at a high level; they don't really have a complete picture. Often they talk about improving what they currently do, doing better than they're doing now, but they're not really specific about what the betterment really is. That gets translated into requirements that may not be all that clear, and certainly not complete, but you have to start somewhere. So the design picks up from that, assuming anything that hasn't been talked about yet. Then the developers get to the code. They have to code something, so they'll code something, and they'll also make assumptions. Eventually you'll get to the point where you can actually test the software, and when you get to the tests, that's when things usually go south on you. So the development process can sometimes look like that. You would think that the development process is a lot about code, but it's actually a lot about communication: what did you want something to do, and did you actually develop something that is useful for the thing you really wanted to have happen?

There are several difficulties in the waterfall development approach. Creating clear and complete requirements is really difficult; it's the hardest part of the process, just saying what it is you really want. Often you just don't know that far up front in the process. What the developers do is overcome that ambiguity in the requirements by making assumptions, because they're going to develop code; that's what they do. They will make good assumptions and they will make bad assumptions, depending on what their domain expertise is and what the real customer is trying to get to. Bad assumptions lead to failures during testing.

Now, don't think that the developers don't test. Developers always test, but they make unit tests, and unit tests are designed to always pass; that's their whole purpose. They demonstrate that the developer created their component correctly and that it does what it's supposed to do, which is great. But what they don't do is take my component, his component, her component, and nine other components, put them all together, and see if the system actually behaves like it's supposed to. So development testing is done by developers, but that's not system-level testing; that's not testing whether you're actually achieving any goals.

So people have gone away from waterfall design and waterfall development, and instead they use an incremental approach. The most famous incremental approach is called agile, but actually there are 14 different variations of agile; they all have different names and are all slightly different. That kind of process choice is a detail; it's not the important part. The important part of doing better than the waterfall approach is to do an incremental approach: do something first, get something to test, test it realistically so that you can understand whether you're missing something, and then take that cycle, improve your requirements, and go again.

The keys to doing that incremental approach are not getting better requirements, because that's the hardest part of the whole thing; that's the part you're actually trying to overcome, that no one can really write clear and complete requirements. It's like high school all over again. It's not a better design either; that's not a key to the process, because you can't
evaluate a design without actually building something. Testing will actually evaluate the design; that will help you, but you can't know a better design ahead of time. So really the key is representative testing, using the software sort of like it's intended, and then, while you're doing the representative testing, understanding the results.

Testing is a key. You're going to do a test every cycle. You're going to have successes and failures, but you're also going to identify things you wish you had. Those are going to be missing requirements: things that you wished you had that aren't coming this cycle but may be coming next cycle. At least you accounted for them. You're also going to have code that's tested early and often, because every cycle you can repeat the same test you did last time and see if you've improved or stayed the same as before.

So testing is a key, but it's not unit testing. Unit testing is not a key, because unit tests always pass. It's the system-level test that you really need. A system-level test is representative: you want to do something that is sort of like what the software is supposed to do, using a workflow or test data or outputs, things that actually give you some meaningful, actionable evaluation. What did the test actually show you, so that you can use that information in the next part of the cycle?

Let me tell you about an experience we had where we were part of a big system. There were some requirements; we built to the requirements; we arrived at the test, and we didn't do well in testing. How could we not do well in testing, we wondered? It was because they actually tested to the CONOPS they were trying to achieve: some representative use cases. They were trying to represent their CONOPS and run the software that way, which was terrific, a great idea. The sad part about it is that the people who wrote the requirements had not talked to the CONOPS people, so the requirements didn't actually satisfy the CONOPS. So software that was developed to the requirements didn't actually pass the testing. It would have been great had we done more of an incremental approach and found that out earlier rather than later in the process.

So representative testing is the key, but it's also understanding the results. How do you understand the results? Well, vehicle mission modeling is complicated: it's time-dynamic and multi-domain, with lots of technical computation. An Excel spreadsheet is a way to understand information, but it doesn't communicate it very well. 3D visualization, we have found, is a great way to understand what's going on. So part of understanding the results could be things like graphs and animations and pictures that help you understand what the test is trying to do and then what the results mean.

So the key to the cycle is doing representative tests and understanding the results. Where can you get representative tests? Well, if you're replacing old code, you probably already have legacy data and legacy results, so if you're replacing something that's already there, you're probably fine; you probably have something to start with. But if you're doing a new capability, that's probably not true. Not many people publish a lot of results with a lot of test data so that you can just readily get it on the internet; it's just not one of those things. So you're probably going to have to generate the test data yourself.

What you really need is a test data generation tool. And what would a test data generation tool be? It would be something that allowed you to do high-fidelity tests or low-fidelity tests, depending on what you needed. You'd want to be able to compute truth data out of it. You'd want to be able to report your inputs and outputs so you can keep track of them as test data. And if the tool actually lets you graph, you could understand the results as you're creating the test; that would be very
useful for you in creating the tests. The ability to save the test data configuration and load it later is sort of a must, because if I have to start over from scratch, that's an awful way to proceed; if I have to, I will, but I'd rather just be able to save my test data and rerun the test. I'd like to easily recompute when things change; that'd be great. An automation interface: I'd like to be able to script it, so I can turn all of my tests into regression tests, so the next cycle I can repeat the same thing I just did. And a big plus if I could visualize what the test data said; that would be useful, because then we could understand what tests we were preparing.

So I guess you're going to have to write your own test tool, which happens to be a whole other development effort, one you don't usually get paid for. Okay, you're going to do it; you have to do it; make your own test tool. We do it at the office sometimes too; it's what we have to do. But how do you test the test tool? You're kind of back at the same problem you just had. What we have done back at home is make our own tests; we did generate a test tool and ran through things too. And we have situations where, well, we don't spend that much time on generating the test tool itself; we spend a lot of time making sure the delivered code is good. Making the test code good is not one of the essential things. So often we find that there are times when we find bugs in the test code, not bugs in the delivered code, and we all hate that. But what are you going to do?

Well, maybe what we could do is see if there's another tool out there that can be a test data generation tool. We think STK can be a test case development tool. STK does modeling and simulation of vehicle motion. It does high fidelity to low fidelity. It can do all kinds of multi-domain mission planning, from mission concepts through operations. It can do all kinds of missions: RF link analysis, station-keeping, constellation design, collection planning. You can kind of make up what you want; as a matter of fact, these are not canned missions. You design the mission to achieve what you're supposed to achieve. You make up a realistic, representative test case that you can use to generate test data for your development effort. It will use realistic models and behaviors, so you can have vehicles of different types that behave like they're supposed to. They'll have instruments on board that behave like they're supposed to, including even pointing behavior. And of course STK has a 3D visualization component, so not only do you get the test data, you can understand the results, because you can see what's happening: you can see what the tests were supposed to do, and you can see what the answers turned out to be. The mission that you see in STK is not just a pretty picture; it's representative of the numbers. The numbers tell us what the animations show. You get the pictures and animations out, and you can understand the results. And of course there are people who would say, "But I really want the pretty picture, because I'm trying to communicate to an audience that doesn't understand as much," and of course we can do the pretty pictures as well.

For more information you can go to agi.com. Thank you.
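The distinction the talk draws between unit tests and system-level tests can be sketched in code. This is a minimal illustration of my own, not something from the talk; the component names and the degrees/radians mix-up are hypothetical stand-ins for "bad assumptions between components."

```python
import math

# Two components, each "unit correct" in isolation.
# (Hypothetical example: the names and the bug are invented.)

def compute_bearing_deg(dx, dy):
    """Developer A's component: returns a bearing in DEGREES.
    Its unit test passes."""
    return math.degrees(math.atan2(dy, dx))

def displacement(distance, bearing):
    """Developer B's component: ASSUMES the bearing is in RADIANS.
    Its unit test (which feeds it radians) also passes."""
    return (distance * math.cos(bearing), distance * math.sin(bearing))

# Unit tests: each component passes on its own terms.
assert abs(compute_bearing_deg(0.0, 1.0) - 90.0) < 1e-9
assert abs(displacement(2.0, math.pi / 2)[0]) < 1e-9

# System-level test: wire the components together the way the
# mission would, and check an end-to-end expectation.
bearing = compute_bearing_deg(0.0, 1.0)   # 90 (degrees)
x, y = displacement(2.0, bearing)         # ...interpreted as radians!
system_ok = abs(x) < 1e-6 and abs(y - 2.0) < 1e-6
print("system test passed:", system_ok)   # False: a bad assumption
```

Both unit tests pass, yet the composed system fails the representative check, which is exactly the failure mode that only shows up when the components are put together.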
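The save/reload and scripting features the talk asks of a test data generation tool can also be sketched. This is a hypothetical illustration, not any real tool's API: a test-data configuration with truth data is saved once, reloaded the next cycle, and rerun as a regression test against the delivered code.

```python
import json
import os
import tempfile

# Hypothetical sketch of the test-tool features described above:
# save a test-data configuration, reload it later, and rerun the
# same representative test each cycle as a regression test.

def save_config(path, config):
    """Persist the test-data configuration so no one starts from scratch."""
    with open(path, "w") as f:
        json.dump(config, f, indent=2)

def load_config(path):
    with open(path) as f:
        return json.load(f)

def run_test(config, software_under_test):
    """Run one representative test, reporting inputs and outputs
    together so the result is reproducible next cycle."""
    outputs = [software_under_test(x) for x in config["inputs"]]
    return {"inputs": config["inputs"],
            "outputs": outputs,
            "truth": config["truth"],
            "passed": outputs == config["truth"]}

# One cycle: define the test data once, with computed truth data...
config = {"inputs": [1, 2, 3], "truth": [2, 4, 6]}
path = os.path.join(tempfile.mkdtemp(), "testcase.json")
save_config(path, config)

# ...then next cycle, reload the same configuration and rerun it
# against the (possibly changed) delivered code.
reloaded = load_config(path)
report = run_test(reloaded, lambda x: 2 * x)  # stub for the delivered code
print("regression passed:", report["passed"])
```

Because the configuration and the truth data are saved together, every cycle can repeat exactly the test it ran last time and compare against the same expected answers.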
