Posts by priyam:
The Database Model reference maintained for Openbravo ERP gives a complete overview of all tables and columns (and their relations).
- Tables are grouped in packages of functionally related entities; for example, all the tables related to invoices are grouped in org.openbravo.model.common.invoice
- Each table - for example C_INVOICE - includes a full description of all of its columns
- Columns that are part of a foreign key - for example C_BPARTNER_ID on C_INVOICE - are shown as foreign key columns pointing to the referenced table, in this case C_BPartner
- Column descriptions - for instance PARTNER_ADDRESS on C_INVOICE - give additional information about the column's purpose
- Columns linked to a callout - for example TRANSACTION_DOCUMENT on C_INVOICE - provide a direct link to the Java source code implementing the callout
One of the cool features of this reference is that it is a live document that lets you search the Openbravo data model directly from the wiki, without the need to open additional tools such as PgAdmin.
This is a really valuable tool that can save hours of investigation and increase the productivity of many Openbravo developers. Take it for a spin.
This database model document is updated automatically for every new MP, which is made possible by:
- Openbravo ERP module org.openbravo.utility.modelwikidoc, which generates the entity model wiki page and its subpages.
- Continuous Integration, which checks for new MPs (Mercurial tags) and executes the wiki data model generation process.
One can check updates in the data model using the page history; for example: http://tinyurl.com/25pgvov
RM Updates: 2.50MP10 released, Quickstart-Spain OBX automated, user manual for setting up a testing environment for Openbravo ERP, Scrum-master-less sprint. January 20th, 2010
Maintenance Pack release: 2.50MP10 is released. The OBX and the update through the MMC are available exclusively for professional customers. The appliance and tarball are available for everyone.
Quickstart-Spain OBX: Created a new job to build the QuickStart-Spain OBX. With this automation we ensure better quality throughout the development cycle, which leads to stable release dates and a higher-quality solution. The latest OBX is available here.
User manual: Created a guide on how to replicate an Openbravo ERP instance running in a production environment into a testing environment.
Scrum-master-less sprint: We successfully completed this sprint with no Scrum Master, which means we shared the Scrum Master's work, with two people from the team rotating to represent us at the Scrum-of-Scrums and the Scrum Master's meeting.
We are working to achieve automatic promotion of "good" revisions to main. First of all, what do we mean by "good"? Right now we measure this as a set of builds and tests that run successfully on a specific code revision of pi. When this happens, all the code up to that revision is selected to be promoted to the stable repository main.
- Main only includes stable code.
- Code is promoted from the development repositories to main as fast as possible - as fast as the code quality permits.
We have pi, main and a Continuous Integration engine:
- pi: The pre-integration repository. Developers push directly here.
- main: The stable repository. A manual merge is done before release time.
- builds: Integrated with the Hudson tool, which executes about 12 builds/tests on main and pi.
This is how pi and main fit in our Continuous Integration engine:
- Developers commit locally and push their changes to pi.
- There are around 12 builds and tests polling the SCM for changes to the pi repository. If there are changes, the builds are run. If they fail, developers are notified in the IRC channel #openbravo and on the development mailing list.
- When the release start date comes, we make sure that all the tests in pi run successfully.
- Once everything is green and good, we merge pi into main.
- QA starts the manual test process, testing those parts that are not automated yet.
- If major issues are found, we transplant changesets from pi into main.
- Main does not always contain stable code. Whenever we do a transplant there is a potential risk of breakage, so we need to run all the tests in main again. So main can only be considered reliable at release time - and we want the tip of the repository to always be trustworthy.
- Experience has shown that pi tends to be unstable, which annoys developers and the release engineering team (us).
- Depending on the number of commits and changesets pushed to pi since the last merge to main, a transplant carries a potential risk, because pi is more advanced in features and fixes - and we don't want to freeze pi.
We use Mercurial as our SCM, so its distributed nature is an invaluable help in solving this problem. We are already running a bunch of builds and tests, which is great. We now want to automatically mark those revisions as "good" or "tested".
Need for an integration stage
Doing integration in main is a bad idea; doing it in pi is even worse. We already have the pre-integration and the final main repositories. So having an integration repository is a natural choice that fits this model.
Now we have pi, main, int and builds
How would this work with an integration repository? (let's call it "int")
- Developers work on pi. A set of tests are run on pi to detect silly problems.
- int pulls from pi from time to time, and triggers a set of builds and tests.
- If all the builds and tests run successfully, we can consider the tip of int as "good", so int pushes all the changesets to main.
- Repeat the process forever.
Step 1: Create the first/top job, which does:
* Clone int locally in the system (int-1)
Step 2: Incremental build for PostgreSQL (erp_devel_int-inc-pgsql).
* This job polls int-1; if there are changes, the job runs.
* If the job is successful, it pushes the changesets to a new repository, int-2.
Step 3: Incremental build for Oracle (erp_devel_int-inc-oracle).
* This job polls int-2; if there are changes, the job runs.
* If the job is successful, it pushes the changesets to a new repository, int-3.
(. . .)
Step 11: Smoke test on Oracle (erp_devel_int-oracle-smoke-test).
* This job polls int-10; if there are changes, the job runs.
Step 12: Promote pi to main (erp_promote_pi_to_main)
* If the job is successful, it pushes the changesets from int-11 to main
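The gating behind these chained jobs can be mimicked with a tiny shell simulation. This is not real Hudson or Mercurial: plain files stand in for the int-N repositories, "pushing changesets" is just a copy, and the failing Oracle build is invented to show how a failure stops the chain.

```shell
#!/bin/sh
# Toy simulation of the chained gating: each "repository" is just a file, and
# pushing changesets downstream is a copy. The failing Oracle build is invented
# to show that nothing past it reaches main.
set -eu
work=$(mktemp -d)
echo "rev42" > "$work/int-1"              # a new changeset lands in int-1

run_job() {   # $1 = source repo, $2 = destination repo, $3 = build result (0/1)
  [ -f "$work/$1" ] || return 0           # no changes: the job is not triggered
  if [ "$3" -eq 1 ]; then
    cp "$work/$1" "$work/$2"              # build passed: push changesets on
    echo "$1: PASS, pushed to $2"
  else
    echo "$1: FAIL, nothing pushed"       # the chain stops here
  fi
}

run_job int-1 int-2 1                     # incremental PostgreSQL build
run_job int-2 int-3 0                     # incremental Oracle build fails
run_job int-3 main  1                     # never fires: int-3 was never fed
```

Because each job only consumes what the previous one pushed, a revision that reaches the last repository has, by construction, passed every job before it.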
What do we achieve with this model?
- Logical order: if an incremental job fails, a full build will not be triggered, because the full job polls for changes from int-3, and as the job failed no push to int-3 has happened.
- The revision tested by the last job has been tested in all the jobs. So we have the guarantee that it has passed all the tests. And we can push it to main.
- The short jobs do not have to wait for the long ones. Not all the revisions tested by job 1 are tested by job 2, but the opposite is always true: all the revisions tested by job 2 have been tested by job 1.
- The model does not depend on a specific Continuous Integration software.
Continuous Integration (CI)
- Continuous integration servers constantly monitor source code repositories
- As soon as new changes/commits are detected, they initiate a new build cycle.
- A build cycle involves code compilation and may additionally run various tests and code analysis.
- If the process encounters errors, it notifies the build master.
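As a toy illustration of that monitor/build/notify cycle (not real CI-server code: a file stands in for the monitored repository, and the revision ids and the failing build are invented):

```shell
#!/bin/sh
# Toy simulation of the CI loop: poll a "repository" (a plain file), start a
# build cycle on new revisions, and notify on failure. The "build" is scripted
# to fail on rev2 just to exercise the notification path.
set -eu
repo=$(mktemp); echo "rev1" > "$repo"
last_seen=""

build() { [ "$1" != "rev2" ]; }           # pretend build: rev2 is the bad one

poll_once() {
  rev=$(cat "$repo")
  [ "$rev" = "$last_seen" ] && return 0   # no new commits: nothing to do
  last_seen=$rev                          # change detected: start a build cycle
  if build "$rev"; then
    echo "build OK: $rev"
  else
    echo "notify build master: $rev failed"
  fi
}

poll_once                                 # sees rev1, builds it successfully
echo "rev2" > "$repo"                     # a new commit arrives
poll_once                                 # sees rev2, build fails, notifies
```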
- Installation and Configuration (friendly web GUI with extensive on-the-fly error checks , in-line help.)
- Extensibility through plugin
- Permanent links (gives you clean readable URLs for most of its pages)
Hudson is a continuous integration tool which schedules job processes much like cron jobs:
- It is primarily a set of Java classes, with Hudson as the root of the object model; it has a project class and a build class, plus interfaces to perform parts of a build, such as SCM.
- Hudson classes are bound to Stapler (a library that "staples" your application objects to URLs, making it easier to write web applications; the core idea of Stapler is to automatically assign URLs to your objects, creating an intuitive URL hierarchy).
- To render HTML pages Hudson uses Jelly (a tool for turning XML into executable code; a Java- and XML-based scripting and processing engine).
- It uses the file system to store its data: directories under HUDSON_HOME hold mostly plain text, like the console output (a few files use the Java properties format; the majority use XStream, e.g. the project configuration or the various build records).
- Hudson can be installed either by running the hudson.war file (e.g. java -jar hudson.war) or by deploying it in a servlet container.
How I deployed
I deployed Hudson on Tomcat on port 8080 and redirected it to port 80 (the default) using mod_jk and Apache on a Gentoo operating system:
- Deploy the war file in the Tomcat webapps directory.
- Add "-D JK" to the APACHE2_OPTS line in /etc/conf.d/apache2.
- Add "JkMount /* ajp13" near the end of /etc/apache2/modules.d/88_mod_jk.conf.
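For reference, the JkMount line ends up inside a fragment like the one below. Only the JkMount directive, the JK define and the file path come from the steps above; the rest is a typical mod_jk setup and may differ on your install (the worker name ajp13 must match the one defined in your workers.properties):

```apache
# /etc/apache2/modules.d/88_mod_jk.conf (sketch; paths and worker name are
# typical defaults, not taken from the post)
<IfDefine JK>
    LoadModule jk_module modules/mod_jk.so
    JkWorkersFile /etc/apache2/workers.properties
    JkLogFile     /var/log/apache2/mod_jk.log
    JkLogLevel    info
    JkMount /* ajp13
</IfDefine>
```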