Posts by obqateam:
- Design of new Smoke Test
- Test cases review
- New flows review
- New suites organization
- New test cases development
- Test cases review (part 2)
- Test case validation (with other teams)
- Design of new Automated Test
- Classes (foundation) review
- Diagram design (high level)
- Component design (low level)
- Deliverables testing
A new major release is coming! On March 31st, 2011, Openbravo ERP 3.0 is planned to be released.
It will come with several changes, both functional and technological. The new release will have tons of cool features and a huge GUI redesign is planned… and this is good news, isn’t it? Well…
My first thought when I saw the mockups for new layout was “Yikes! This will be hard to implement in Selenium“.
From a strict Quality Assurance point of view, new stuff means more potential problems. Even considering only our Smoke Test, and ignoring the rest of the testing cycles we run, it is clear that most of the test cases will require hard work to update. And on the Automation side, there are even more issues.
But a moment later, my mind changed to “Wait a minute! We can do this. It is a challenge, and challenges are the real motivation for the team“. These mixed feelings are caused by the difficulties we expect along the new Automated Test development: test cases not playing the proper test steps, and things like that.
However, those are actually the things that help us (both the QA Team as a unit and Openbravo as a company) to improve our previous processes.
So the objective is to develop a fully functional Smoke Test and automate it to run against Openbravo ERP 3.0. In order to achieve the goal, we have identified several tasks.
As we said in previous posts, our expertise in Selenium is increasing rapidly. We are currently able to perform several types of tests and build reliable, fast test suites.
There are two main development lines running in parallel. The Functional branch is intended to develop the test cases in Testlink according to the new flows that will be delivered in 3.0. Partially depending on this Functional branch, the Automated branch will focus on Selenium issues.
One of the most evident features in 3.0 is the GUI redesign, and that means a direct hit to our foundation classes for automated tests. Furthermore, the use of a framework such as SmartClient is a major drawback, since from a plain HTML perspective the application is completely different from the 2.50 series.
As a conclusion, this post’s title should be changed to “Openbravo 3.0: What a great moment to develop more Selenium stuff!“
We have been working with Selenium for several years now, and our experience has been great. Selenium is flexible, powerful and easy to learn.
We have the Smoke Test running as a required step for code promotion in the Continuous Integration release cycle.
However, we are looking to move the Selenium tests earlier in the ERP’s development cycle. Here we face one of the biggest drawbacks of our tests: they were designed around a full execution starting from scratch.
That is, test cases depend on the successful run of previous test cases in order to execute. A first snapshot of these dependencies appeared when we worked on parallel execution.
Now consider the following scenario: I am a developer and I have made a fix that could cause some instability in the Production module. I should not have to commit and wait for CI’s email about the result, not to mention waiting for QA’s standard testing cycle, or for a customer to hit the problem after the fix is included in some release.
The Release Management team has built a very cool feature named “Try”, and we would like to add something to it. Instead of executing an “all-purpose” test, we are looking for specific suites.
So, in the example, even if we know that only the Production module could be affected by the change, the currently available tests would only allow a full run. Could the Production Smoke suite be executed separately? Well, yes. We added some extra capabilities to our suites by combining standard Selenium scripts with DBUnit scripts. The concept is quite simple, as explained on DBUnit’s web page:
DBUnit is a JUnit extension targeted at database-driven projects that, among other things, puts your database into a known state between test runs.
A DBUnit script will be executed before launching the Selenium script, and it will make the required changes to fulfill the Selenium script’s preconditions. Then, the test case is executed as usual.
DBUnit data is created using XML files containing the rows that will be used by the next script. There is one complication: the DBUnit part must be created in a safe way, meaning no interference with existing data is allowed. Static XML files were not enough, so we created tags where dynamic data was required, and then turned that dynamic data into parameters that we could use in the next tests.
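As an illustration, the tagging idea can be sketched in plain Java (the tag syntax and names here are hypothetical, not our actual implementation): the dataset template contains placeholder tags that are replaced with freshly generated values before DBUnit loads the file, and the generated values are kept so the following Selenium script can reuse them.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Sketch: replace ${...} tags in a DBUnit-style dataset with dynamic
// values, remembering them so later test steps can reuse the same data.
public class DatasetTemplate {

    private final Map<String, String> parameters = new HashMap<>();

    // Resolves a tag once and returns the same value on later calls,
    // so every row referring to ${customerName} stays consistent.
    private String resolve(String tag) {
        return parameters.computeIfAbsent(tag,
            t -> t + "-" + UUID.randomUUID().toString().substring(0, 8));
    }

    public String render(String template) {
        StringBuilder out = new StringBuilder();
        int pos = 0;
        while (true) {
            int start = template.indexOf("${", pos);
            if (start < 0) {
                out.append(template.substring(pos));
                return out.toString();
            }
            int end = template.indexOf('}', start);
            out.append(template, pos, start)
               .append(resolve(template.substring(start + 2, end)));
            pos = end + 1;
        }
    }

    // Exposes the generated values so the Selenium script can use them.
    public Map<String, String> getParameters() {
        return parameters;
    }
}
```

The same rendered value then becomes a parameter for the Selenium script, which is what keeps the DBUnit rows and the UI steps pointing at the same record.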
The result of this work can be seen in the pi-dbunit branch. Currently, only a small set of tests is available, because generating the XML files is a hard task, requiring deep knowledge of Openbravo’s DB structure, table names, triggers and constraints.
Our goal is to have a full set of tests for every major module in the ERP, making it easier to run specific tests in a very fast way.
Just for info: a full DBUnit+Selenium execution of the Production suite could take ten to fifteen minutes. Currently, the suite alone takes ten minutes… but you have to execute one hour of tests before getting there.
For several weeks we have been working on infrastructure improvements. In order to make our department’s processes more reliable, and now that parallel execution is ready to run, we have also automated the testing of our own code.
One of the most critical processes in our automation cycle is testing the code we deliver. We work in several branches, and we have to ensure proper integration in order to execute Hudson‘s ERP-CI jobs in a reliable way.
Our goal was to use the same approach the ERP has. After all, Automation and the ERP are both development projects, so what has proved to work great for the ERP should also be good for Automation. So we set up our test contexts in Hudson. The infrastructure is a simplified version of the current ERP structure. We usually develop in several branches, as many as our projects require. Smoke Test is the main project, but there are other projects as well.
Assigning names to the branches was not easy. We have three levels of branches: Development, Product Integration and Stable. And the stable branch was required to run against both the ERP’s PI and Main branches.
Tagging revisions to select the version that will run with PI and with Main would be the most logical choice. However, it would be inefficient. The ERP branches for PI and Main are different: the Main branch is updated either in bulk (when Continuous Integration tests pass) or through individual transplants. That means that, potentially, a transplant could change the behavior of the ERP, making the tag for the Automation useless.
So we chose to use two branches to match the ERP layout. And for easy understanding, the stable branch that is executed against Main is named Main, and the one for PI, PI.
That led us to the next problem: since we took the PI name for one stable branch, another name had to be chosen for the Product Integration branch. We decided to use “int” (for Integration).
Finally, development branches have a simple naming convention: pi-* (i.e. pi-smoke, pi-regression, pi-localization, and so on).
All these branches are periodically synchronized using the Integration branch (int) as a hub. When the Integration branch is considered stable, the code is promoted to the PI branch.
Once there, control of the PI branch is taken over by the Release Management team, in order to ensure that the proper Automation version is executed with any given ERP version. The Automation PI branch is considered stable, and it is used to test an ERP PI branch. If the tests pass, the ERP code is promoted to the Main branch, and Automation Main is updated to that version of Automation PI. That means that a specific version of the stable automation code is “frozen”, so it can be executed successfully as many times as required.
If a new version of the ERP (in the PI branch) requires testing, Automation PI will be used. And if the ERP behavior changed because of an expected fix, a change in automation can be developed and promoted to PI without changing the code in Main.
A more complex scenario can also happen. When the QA team is testing a Maintenance Pack candidate (that is, the latest Main branch revision), it could turn out that a change is required (i.e. a defect was not properly fixed), triggering a transplant. The developer pushes a new changeset to ERP PI fixing the issue, and the Release Management team transplants it to the Main branch.
In that case, the automation code will remain the same, since the ERP behavior is expected to remain unchanged. However, there is a small chance that the fix changed the behavior on purpose. In that case, a fix in the automation branch should be pushed and then transplanted to the Main branch as well, allowing a successful execution of the changed Automated Test.
At this point, we have achieved a huge improvement by automating part of the deploy cycle. In the next weeks we will add another cool feature (also inspired by the current ERP’s process): automatic code promotion. The plan is to have a daemon monitoring the Integration branch. Whenever it detects a commit coming from any of the development branches, it will run a series of tests. If all of them succeed, the code will be considered stable and automatically promoted to the stable PI branch.
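The planned promotion logic can be sketched roughly as follows. This is a minimal sketch under stated assumptions: the `Repo` interface is a hypothetical abstraction over the version control and build commands the real daemon would shell out to.

```java
import java.util.List;
import java.util.function.BooleanSupplier;

// Sketch of the planned promotion daemon: watch the Integration branch,
// run the test suites on every new commit, and promote only if all pass.
public class PromotionDaemon {

    // Hypothetical abstraction over polling for incoming changesets
    // and pushing to the stable PI branch (wiring for illustration only).
    public interface Repo {
        boolean hasNewCommit();
        void promoteToStable();
    }

    private final Repo repo;
    private final List<BooleanSupplier> suites;

    public PromotionDaemon(Repo repo, List<BooleanSupplier> suites) {
        this.repo = repo;
        this.suites = suites;
    }

    // One polling cycle: returns true if a promotion happened.
    public boolean runOnce() {
        if (!repo.hasNewCommit()) {
            return false;
        }
        // All suites must succeed before the code is considered stable.
        for (BooleanSupplier suite : suites) {
            if (!suite.getAsBoolean()) {
                return false;
            }
        }
        repo.promoteToStable();
        return true;
    }
}
```

The real daemon would run `runOnce` on a schedule; keeping the decision logic separate from the shell commands makes it easy to test on its own.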
<target name="test.integration.smoke">
  <sequential>
    <antcall target="test.integration.erp.testsuites.smoke.masterdata"/>
    <antcall target="test.integration.erp.testsuites.smoke.accountingdata"/>
    <parallel>
      <sequential>
        <antcall target="test.integration.erp.testsuites.smoke.financialdata"/>
        <parallel>
          <sequential>
            <antcall target="test.integration.erp.testsuites.smoke.procurement"/>
            <antcall target="test.integration.erp.testsuites.smoke.sales"/>
            <antcall target="test.integration.erp.testsuites.smoke.projectandservice"/>
            <parallel>
              <antcall target="test.integration.erp.testsuites.smoke.production"/>
              <antcall target="test.integration.erp.testsuites.smoke.accountingprocess"/>
            </parallel>
          </sequential>
        </parallel>
      </sequential>
      <antcall target="test.integration.erp.testsuites.smoke.assets"/>
    </parallel>
  </sequential>
</target>
A graphical view of this change is shown in the picture below.
One of our team members has made a blog post about the recent changes in our test code. We hope this new structure will make coding easier.
As we announced last week, we held a webinar showing our vision of Selenium automation and Openbravo ERP.
The session was recorded, so if you want to take a look, it is available here.
Hi all, we are happy to announce the next Openbravo Webinar: Selenium automated testing in Openbravo ERP.
It will be held next April 8th, from 16h to 17h (CET).
The session will be time-boxed to one hour with the following agenda:
- Overview: Automated integration testing
- Automating test cases in Openbravo
- How to create a test case
You can join the session through this link
The session is open to everybody but limited to one hundred (100) attendees. Attendees will be accepted from 15:45 (CET). The session will be recorded and published in the Openbravo wiki; after the session, we will announce through this forum where these resources are available. However, we recommend attending the online session, because you will have the opportunity to ask questions and chat with the Openbravo staff involved in the work presented in the webinar.
If you have never used Adobe Connect Pro you should test your connection in advance: http://openbravo.emea.acrobat.com/common/help/en/support/meeting_test.htm
We look forward to meeting you in the session!
Very often, people use the term “QA” to group so many disciplines that the very concept of Quality Assurance has become difficult to describe.
In a previous post, we tried to describe what Quality is. If the concept of Quality is so diffuse, it is not surprising that the discipline that must assure it is diffuse as well.
Quality Assurance or Quality Control?
No, we are not going to start a new QA vs QC debate here. We will try just to put some borderline that works with our objectives of improving Quality.
Software Development is a complex process. Generally speaking, you could say it is as if the process of building a house started with designing and making the plaster panels, cutting down the trees to make the wood you will need, and so on. And, as an industry, it is not mature enough to be fully reliable. Would you buy a car with a sticker on the steering wheel saying:
ACarForYou ltd. do not represent or warrant to you that:
(a) your use of this car will meet your requirements,
(b) your use of this car will be uninterrupted, timely, secure or free from accidents,
(c) any information obtained by you as a result of your use of this car will be accurate or reliable, and
(d) that defects in the components provided to you as part of this car will be corrected.
However, the Software industry makes millions while including a text like that in its EULAs.
Quality Assurance, a well-established discipline in other industries, is in Software a matter of opinion. But there are some basics to work with.
Nowadays, every development task has some kind of inherent quality process. There are spell checkers, autocompletion abilities, and other useful stuff. Even a basic task like compiling will take care of a number of issues.
A well-trained developer is also able to run the code (s)he just wrote, to test some of the flows and check that, after fixing the compile-time errors, the code actually does something.
We cannot say these tasks do nothing to improve the code’s quality, but we would say they are neither Quality Assurance nor Quality Control: they are part of the Coding Phase. Does that mean that no Quality Assurance exists during the Coding Phase? Of course not.
We, as a Quality Assurance department, believe that no specific task adds quality per se. Peer reviews are not suitable for most projects: they are expensive, since the most valuable resources (experienced developers) are the bottleneck. And maybe Vim is just what meets a developer’s requirements for an IDE, so installing and fine-tuning Eclipse is basically a cool way to waste time.
So, the major goal is to find the perfect fit. A set of processes, tools and disciplines that maximize the quality of the developed code.
Continuing this set of posts we will go deep into some processes, like Black Box Testing (including automation), Unit Testing and User Acceptance Testing.
We will analyze some tools that help us, like a Defect Tracking System, a Test Case Manager and Shared Virtual Machines.
We expect also to cover disciplines such as Agile Development, Professional Testing and Project Management among others.
In Software Development, quality is a must. Every company, from start-ups to market leaders, seeks to deliver quality. But the real question is: what does quality mean?
Quality? What Quality?
According to ISO, quality is:
“The totality of features and characteristics of a product or service that bear on its ability to satisfy stated or implied needs”
And it is a very comfortable definition. The only problem is that it cannot be used directly to measure the quality level of a working piece of software.
So, some extra work is required. First, I will remove the “or service” part, since I would like to talk about Development.
Business, Process and Product Requirements
For any of us dealing with Software Development, it will sound familiar to say that “stated or implied needs” is an abstract way of naming the Requirements.
So, simplifying the ISO statement, we could say that quality is:
“The totality of features and characteristics of a product that bear on its ability to satisfy the Requirements”
Now it is crystal clear, isn’t it? (If your answer is ‘yes’, please stop reading this post)
There are several classifications for the features and characteristics of a product, but I like the ISO 9126:1991 approach:
- Functionality: A set of attributes that bear on the existence of a set of functions and their specified properties. The functions are those that satisfy stated or implied needs. (ISO 9126: 1991, 4.1)
- Reliability: A set of attributes that bear on the capability of software to maintain its level of performance under stated conditions for a stated period of time. (ISO 9126: 1991, 4.2)
- Usability: A set of attributes that bear on the effort needed for use, and on the individual assessment of such use, by a stated or implied set of users. (ISO 9126: 1991, 4.3)
- Efficiency: A set of attributes that bear on the relationship between the level of performance of the software and the amount of resources used, under stated conditions. (ISO 9126: 1991, 4.4)
- Maintainability: A set of attributes that bear on the effort needed to make specified modifications. (ISO 9126: 1991, 4.5)
- Portability: A set of attributes that bear on the ability of software to be transferred from one environment to another. (ISO 9126: 1991, 4.6)
What is Quality all about?
Drilling down into the definition of quality until something meaningful emerges is far from easy. The main issue is that quality is a perception: one single attribute cannot define quality by itself. Delivering zero-bug software is a major objective, but it cannot be isolated from the rest; a Java function that does nothing would accomplish that goal.
The idea, then, is to gather as much information as possible from different sources, seeking a comprehensive list of requirements to fulfill. After that, negotiating priorities according to the Company’s strategic goals will take us to the next level.
In the next posts we will start the process that (hopefully) will significantly improve the current quality level.
After surveying the cloud of automation tools, we chose Selenium for a number of reasons:
It allows using either Mozilla Firefox (versions 2 and 3) or Microsoft Internet Explorer (version 6). Other tools like Watir and Watij work with IE only (although Watir has FireWatir, a mechanism to drive Firefox).
Last but not least, it’s open source software.
Do you like automation? Try it!
Now the whole community can access the automation code for the 2.3x and 2.40 stable branches. A tag for testing the 2.40 community edition is also available, as is the current development Main. For more information, you can visit the Project Page at our Forge.
If you want to know more about the process, you may check the Automation main page in our wiki. There are also a lot of useful pages grouped in the Automation Category.
If you are interested in running the scripts, you may check this wiki page. We encourage you to use automation to check the stability of any Openbravo ERP version you have.
About Automated Software Testing
A key element of any QA process is reliable automation, so that virtually a continuous quality assurance cycle is in place to find any stability issue at early stages. In Openbravo’s QA team, this is achieved with a combination of Java, Selenium and Ant. Java and Selenium are used for web-based functional testing of the user interface. Other Java processes execute queries against the database to verify correct results that are not observable in the UI. And by using Ant, all these processes are linked together and added to the daily build tasks.
The most important aspect of UI-based automation is the high volatility of the resulting scripts. Simple changes that real users may not even notice, like changes in button HTML identifiers, can break an automated test. Functional changes are also part of normal development, like a new requirement to add a mandatory field on a form.
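One common way to reduce this volatility (a sketch of the general technique, not our actual class layout; all names and ids here are hypothetical) is to keep HTML locators in a single lookup table, so that when a button identifier changes, only one entry has to be updated instead of every script:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: map logical element names to HTML locators so that an id
// change in the application requires a single update here, not a fix
// in every test script that clicks the button.
public class Locators {

    private static final Map<String, String> BY_NAME = new HashMap<>();
    static {
        // Hypothetical ids, for illustration only.
        BY_NAME.put("salesOrder.save", "id=buttonSave");
        BY_NAME.put("salesOrder.complete", "id=buttonComplete");
    }

    public static String get(String logicalName) {
        String locator = BY_NAME.get(logicalName);
        if (locator == null) {
            throw new IllegalArgumentException(
                "Unknown element: " + logicalName);
        }
        return locator;
    }
}
```

A script would then call something like `selenium.click(Locators.get("salesOrder.save"))`, keeping the fragile identifiers out of the test logic.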
A failed script execution on a daily build triggers a maintenance task for the automated scripts. Since new functionality is not included in the automation, only changes to current functionality or unexpected behavior can affect the Smoke Test. In the latter case, a bug should be filed. If the change comes from a planned modification, both the online documentation and the automated scripts are updated, and the Smoke Test is run again.
This version of the scripts is somewhat basic regarding dynamic execution. That means that test cases must be executed from first to last, since previously generated data is a precondition. For example, you may note that creating a Sales Order requires a specific Customer, as well as a specific Product, to exist. Dynamic scripts are under development right now, which will allow decoupling the modules to fit any given data.
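The dependency can be illustrated with a toy Java sketch (hypothetical names; the real scripts drive the ERP through Selenium): each step publishes the data it creates into a shared context, and a later step fails fast if its precondition is missing.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: sequential test steps sharing created data through a context.
// createSalesOrder() only works if createCustomer() ran before it,
// which is exactly the coupling the dynamic scripts aim to remove.
public class SmokeFlow {

    private final Map<String, String> context = new HashMap<>();

    public void createCustomer(String name) {
        // In the real test this would fill the Business Partner window.
        context.put("customer", name);
    }

    public String createSalesOrder() {
        String customer = context.get("customer");
        if (customer == null) {
            throw new IllegalStateException(
                "Precondition missing: no customer created yet");
        }
        return "Sales Order for " + customer;
    }
}
```

With dynamic scripts, the precondition would instead be satisfied by loading the required data (for example via DBUnit), so each module could run in isolation.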
We are currently enlarging the scope of the automation for the trunk version. Contributors are welcome: if you have some knowledge of QA processes, automation and programming skills, contact us at automation _at_ openbravo _dot_ com
Additionally, you may want to develop your own scripts to verify custom code. We will gladly help you if you contact us in our Automation forum at the Forge.