Sunday, March 15, 2015

Testing Winforms with Python




by  Maksim Kozyarchuk





Overview

The classical approach to automated testing of WinForms applications typically involves testing tools such as WinRunner that allow a user to record and play back user interface (UI) interactions as test scripts.  This approach, while seemingly easy to get started with, tends to produce very fragile tests that break when the UI configuration changes in even minor ways.  Furthermore, understanding and maintaining the resulting test scripts requires fairly sophisticated tooling with a significant learning curve.  A number of alternative frameworks, such as Quail and NUnitForms, have been developed lately that solve most of the issues with WinRunner-style tests, but they require QA engineers to write tests in C#.  In this post, I will extend the techniques for launching WinForms applications from a Python program, discussed in the previous post, into a simple testing framework that provides plain-text and Python interfaces for manipulating and validating the behaviour of Windows applications.


Launching Winforms application from Python

When building a test runner we cannot use the Application.Run() method, as it takes control of the primary thread and blocks us from executing a test scenario.  Instead, we can replicate the behaviour of Application.Run() in Python by calling the Show() method of the form class and then calling Application.DoEvents() in a loop:
import clr
from System.Reflection import Assembly
from System.Windows.Forms import Application

# Load the assembly containing the form, then import it as a module
Assembly.LoadFile(r'c:\work\simple_form\SimpleForm.DLL')
import SimpleForm

# Show() displays the form without blocking, unlike Application.Run()
SimpleForm.SimpleForm().Show()
while True:
    Application.DoEvents()  # pump pending Windows messages

As we will see later, we will not need to call DoEvents() continuously, only at the points where some asynchronous activity is expected.  In fact, this also gives us the control needed to test asynchronous behaviour.
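Since the event loop is now under our control, a small helper can pump events only until some condition holds or a timeout expires. Below is a minimal sketch; the pump callable is a parameter so the helper itself stays free of .NET dependencies, and the idea that you would pass Application.DoEvents as pump in a real WinForms test is my assumption, not code from the post:

```python
import time

def wait_until(condition, pump, timeout=5.0, interval=0.01):
    """Pump UI events until condition() is true or timeout (seconds) expires.

    condition -- zero-argument callable returning True when the test can proceed
    pump      -- zero-argument callable that processes pending events
                 (e.g. Application.DoEvents in a WinForms test)
    Returns True if the condition was met, False on timeout.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        pump()                   # let the UI process queued events
        if condition():
            return True
        time.sleep(interval)     # avoid spinning the CPU
    return condition()
```

In a WinForms test one might call, for example, wait_until(lambda: form.Controls["txt_to"].Text == "abc", Application.DoEvents), where the control name is hypothetical.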


Building a Test

    A typical application test will start by launching the application, then perform a set of setup actions, followed by a set of validation steps, and finally close the form.  To demonstrate how this can be done from Python, I’ve created another simple form called FormForTest, which has a From text box, a To text box, and a Copy button that copies the value from the From text box to the To text box.  We can now build a Python program to test the behaviour of our form.
import clr
from System.Reflection import Assembly
Assembly.LoadFile(r'c:\work\simple_form\FormForTest.DLL')
from FormForTest import FormForTest
context = {}  # shared state between fixtures, e.g. the form under test

class LaunchForm:
   def __init__(self, name):
       self.name = name
   def execute(self):
       context["ActiveForm"] = globals()[self.name]()
       context["ActiveForm"].Show()

class BaseControlFixture:
   def __init__(self, name):
       self.name = name
   def get_control(self):
       for control in context["ActiveForm"].Controls:
           if control.Name == self.name:
               return control
       raise NameError( "Field %s is not found" % self.name )
   
class SetField(BaseControlFixture):
   def __init__(self, name, value):
       super().__init__(name)
       self.value = value
   def execute(self):
       self.get_control().Text = self.value

class ClickButton(BaseControlFixture):
   def execute(self):
       self.get_control().PerformClick()
       
class GetFieldValue(BaseControlFixture):
   def execute(self):
       return self.get_control().Text

def run_step(fixture, expect = None):
   result = fixture.execute()
   assert result == expect, "%r != %r" % (result, expect)

run_step( LaunchForm("FormForTest") )
run_step( SetField("txt_from", "abc") )
run_step( GetFieldValue("txt_to"), expect = "" )
run_step( ClickButton("btn_copy") )
run_step( GetFieldValue("txt_to"), expect = "abc" )

print("Test Completed Successfully")



The five run_step calls at the bottom of the script define the actual test, while the rest of the script is the testing framework.
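The globals()-based lookup used above works, but it couples fixture discovery to module scope. One alternative, sketched here as an assumption rather than as what the post's framework actually does, is a small decorator-based registry; the Echo fixture is a toy stand-in for SetField, ClickButton, and friends:

```python
# Registry mapping fixture names to classes; populated by the decorator.
FIXTURES = {}

def fixture(cls):
    """Class decorator that registers a fixture class under its name."""
    FIXTURES[cls.__name__] = cls
    return cls

def create_fixture(name, **params):
    """Instantiate a registered fixture, with a clear error for typos."""
    if name not in FIXTURES:
        raise NameError("Unknown fixture: %s" % name)
    return FIXTURES[name](**params)

@fixture
class Echo:
    """Toy fixture standing in for SetField, ClickButton, etc."""
    def __init__(self, value):
        self.value = value
    def execute(self):
        return self.value
```

With this in place, a runner can call create_fixture(step.fixture, **step.params) instead of reaching into globals(), and a misspelled fixture name fails with a readable error.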


Defining a Testing Framework

To improve the readability and maintainability of tests, we should separate test definitions from the test framework and create a reporting mechanism that is easy on the eyes and helps identify exactly where an error occurred.  We can define a simple grammar for our tests, centred around the fixture notion from the previous section.  The grammar for the above test could look as follows.


fixture: LaunchForm
param:name=FormForTest

fixture: SetField
param:name=txt_from
param:value=abc

fixture: ClickButton
param:name = btn_copy

fixture: GetFieldValue
param:name = txt_to
expect: abc
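This grammar is line-oriented, so it can also be parsed with nothing but the standard library. Here is a minimal sketch; the dict-per-step output shape is my choice for illustration, not the format the post's runner consumes:

```python
def parse_test(text):
    """Parse the fixture:/param:/expect: grammar into a list of step dicts."""
    steps, current = [], None
    for raw in text.splitlines():
        line = raw.strip()
        if not line:
            continue                         # blank lines separate fixtures
        key, _, value = line.partition(":")
        key, value = key.strip().lower(), value.strip()
        if key == "fixture":
            current = {"fixture": value, "params": {}, "expect": None}
            steps.append(current)
        elif key == "param" and current is not None:
            pname, _, pvalue = value.partition("=")
            current["params"][pname.strip()] = pvalue.strip()
        elif key == "expect" and current is not None:
            current["expect"] = value
    return steps
```

Because partition() tolerates spaces around "=" after stripping, both "param:name=txt_from" and "param:name = txt_to" parse the same way.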

A test runner that executes a test written in this grammar and produces a simple HTML report is shown below.  The runner assumes that framework.py defines all of the relevant fixtures; this could be cleaned up a little by introducing a notion of namespaces for fixtures.

import argparse
import os, sys
from framework import *
from pyparsing import Literal, Word, Group, ZeroOrMore, CaselessLiteral,srange
from jinja2 import Template

class StepRunner:
   def __init__(self, fixture, params, expect = None):
       self.fixture = fixture
       self.expect = expect
       self.params = dict((k, v) for k, v in params)
       self.formatted_params = ", ".join("%s=%s" % (k, v) for k, v in params)
       self.result = "Not Run"
       self.fail = False
   def run(self):
       runnable = globals()[self.fixture](**self.params)
       try:
           result = runnable.execute()
           if result == self.expect:
               if self.expect is None:
                   self.result = "PASS"
               else:
                   self.result = result
           else:
               self.fail = True
               self.result = "ERROR:%s  != %s" % (self.expect, result)
       except Exception as err:
           self.fail = True
           self.result = str(err)
   
def parse_file(file_name):
   # Grammar: a test is a series of fixtures, each with zero or more
   # "param:" lines and an optional "expect:" line.
   Identifier = Word(srange("[a-zA-Z0-9_]"))
   fixture_line = CaselessLiteral("fixture:").suppress() + Identifier
   param_line = CaselessLiteral("param:").suppress() + Identifier + Literal("=").suppress() + Identifier
   expect_line = CaselessLiteral("expect:").suppress() + Identifier
   fixture = fixture_line + Group(ZeroOrMore(Group(param_line))) + ZeroOrMore(expect_line)
   grammar = ZeroOrMore(Group(fixture))
   
   steps = []
   with open(file_name) as f:
       for fixture in grammar.parseString(f.read()):
           steps.append(StepRunner( *fixture))
   return steps
           
def format_result(file_name, steps):
   template = Template("""<html>
   <header> <center><h1> Test: {{test_name}} </h1></center> </header>
   <body>
   <table  border="1" style="width:100%">
   <tr  bgcolor="#A8A8A8"><td>Fixture</td><td>Params</td><td>Result</td></tr>
   {% for step in steps %}
     {% if step.fail %}
     <tr bgcolor="red">
     {% else %}
      <tr bgcolor="#00FF00">
     {% endif %}
     <td>{{step.fixture}}</td><td>{{step.formatted_params}}</td><td>{{step.result}}</td></tr>      
   {% endfor %}
   </table></body></html>
   """)
   report_file_name = os.path.splitext(file_name)[0]+".html"
   with open(report_file_name, "w") as f:
       f.write(template.render(test_name=os.path.basename(file_name), steps=steps))
   print("Saved results to %s" % report_file_name)
       
if __name__ == "__main__":
   parser = argparse.ArgumentParser(description='Run a test.')
   parser.add_argument('--file_name', dest='file_name', required = True)
   args = parser.parse_args()
   steps = parse_file( args.file_name)
   for step in steps:
       step.run()

   format_result(args.file_name, steps)
   if any(s.fail for s in steps):
       print( "Test Failed")
       sys.exit(1)
   else:
       print( "Test Completed Successfully")
       sys.exit(0)




Beyond Winforms

To build effective acceptance tests for a Windows application, one often has to do a lot more than interact with Windows screens.  Data setup and validation often require database queries and interaction with the file system.  Having a testing framework built in Python offers a big advantage to your QA team, enabling them to create fixtures for additional setup and validation using a high-level language with excellent library support.  Furthermore, the fixture abstraction can significantly improve the maintainability and readability of tests by:
  • Decoupling tests from control names and types, enabling refactoring of screens without having to update all of the impacted tests
  • Creating higher-level fixtures that wrap frequently repeated steps, e.g. a LoginFixture or a BookTrade fixture
  • Introducing additional validation and logging logic when unexpected behaviour occurs within your Windows application, which can be invaluable when troubleshooting hard-to-reproduce issues
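As a sketch of the second point, a higher-level fixture can simply run a list of lower-level ones. The LoginFixture name and the steps it wraps are hypothetical, and the Record fixture stands in for SetField/ClickButton so the example runs without a form:

```python
class CompositeFixture:
    """Runs a sequence of sub-fixtures as one named step."""
    def __init__(self, steps):
        self.steps = steps
    def execute(self):
        for step in self.steps:
            step.execute()

class Record:
    """Stand-in fixture that appends its label to a shared log."""
    def __init__(self, log, label):
        self.log, self.label = log, label
    def execute(self):
        self.log.append(self.label)

def LoginFixture(log):
    # Hypothetical wrapper: in the real framework these would be e.g.
    # SetField("txt_user", ...), SetField("txt_password", ...),
    # ClickButton("btn_login")
    return CompositeFixture([Record(log, "set user"),
                             Record(log, "set password"),
                             Record(log, "click login")])
```

A test file would then contain one "fixture: LoginFixture" step instead of three, and a change to the login screen touches one fixture rather than every test.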



