Posts by galderromo:
- Master data & historical data: Master data or reference data (business partners, products, chart of accounts, assets, banks, taxes, production plans, etc.) is what really makes sense to migrate instead of typing it in manually. Historical data (orders, receipts, shipments, invoices, bank statements, amortizations, accounting entries, etc.) can usually be reviewed on the old system, so we can probably convince the customer it is not really necessary. In any case, it is important to realize that a historical data migration always requires a master data migration, and also that a historical data migration can take four times as long as the master data migration (20% vs. 80% of the effort).
- Accounting data & management data: There is also a difference between migrating just the accounting data and having to migrate all the invoices, bank statements, daily cash entries, etc. so that the accounting data is created by the new application. An accounting data migration may take 20% of the time that a management data migration will.
- Period of time: Although the period won’t change the scope of the migration task, more defects and inconsistencies will certainly be found the larger the period is. If we ask the customer about the migration period, they will always say: “all existing information”.
- Extraction and transformation tasks included & excluded: Too often, when talking about migration, we forget the extraction and transformation tasks. Sometimes it is clear that we do not know the old system, or that the customer does not want to give us access to it, so we clearly cannot deal with the extraction and transformation; but in any case, it is important to make clear who is responsible for data extraction and transformation. Data that was not extracted correctly, after we have invested time in it, can be a big issue. I always try to ask for extracted and transformed data, ready to be loaded.
- Extracted data consistency and validation: If we cannot avoid the extraction and transformation tasks, make sure the customer validates what we are going to migrate. It is useful to deliver lists or spreadsheets containing the final migration result before we finally load it, so the customer can validate it.
When using the grid view for amounts, Openbravo includes a very valuable feature: if you select all the cells you want to operate on and move your mouse to the header of the column, the selected column’s total amount will be shown in the bottom left corner of your browser.
It is usual, whether in a development or a production environment, to include improvements in your OpenbravoERP instance, and therefore to need to deploy a new war file.
Although it may sound obvious, many of us usually stop the Tomcat service and erase the old deployed sources in webapps. Then the new war file is copied and Tomcat is restarted.
To be more efficient and reduce the time Tomcat takes to stop and start, it is better to also stop/start the Apache service. I mean: stop httpd, stop tomcat, copy the war file, start tomcat and start httpd. It looks like it should take longer, but it doesn’t. Apache is quick to stop and start, and with Apache stopped, Tomcat stops faster.
This way the interruption will be shorter, both in development and in production environments.
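As a minimal sketch, the sequence could be scripted like this. The service names, webapps path and war location are assumptions for a typical Linux install; adjust them to your environment. It is written as a dry run that only prints the steps unless EXECUTE=1 is set:

```shell
#!/bin/sh
# Sketch of the redeploy sequence; paths and service names are assumptions.
# Dry run by default: set EXECUTE=1 to actually execute the commands.
run() {
  if [ "$EXECUTE" = "1" ]; then
    "$@"
  else
    echo "would run: $*"
  fi
}

run service httpd stop                             # stopping Apache first makes Tomcat stop faster
run service tomcat stop
run rm -rf /var/lib/tomcat/webapps/openbravo       # erase the old deployed sources
run cp lib/openbravo.war /var/lib/tomcat/webapps/  # copy the new war file
run service tomcat start
run service httpd start                            # Apache comes back last
```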
Implementing OpenbravoERP in mid-size and big companies sometimes means having plenty of new developments and customizations, apart from a bigger pool of users. Therefore the production server will need more resources.
When the production environment runs on a large server with plenty of resources, OpenbravoERP needs some parametrization in order to exploit all those resources properly.
Here are some improvements based on my experience with a 20 GB server:
Related to Ant tasks:
To avoid java heap space errors when many sources have been developed, increase build.maxmemory in the build.xml file. By default, from OB 2.50-MP12 onward, this parameter is set to 1024M for 64-bit servers and to 512M for 32-bit servers. Compilation will also be faster.
Related to Tomcat:
To ensure higher efficiency and lower response times for the users, customize some Tomcat parameters. In the file /etc/profile.d/tomcat.sh, change the -Xmx parameter based on your own criteria, bearing in mind that 64-bit servers need more resources than 32-bit ones. The following is just an example:
export CATALINA_OPTS="-server -Xms128M -Xmx2560M -XX:MaxPermSize=256M -Djava.library.path=/usr/lib64"
It is important to check the existing Tomcat documentation before you change anything, and change it first on a development environment.
Related to PostgreSQL:
To ensure better database performance, edit /srv/pgsql/8.3/postgresql.conf and change these parameters: shared_buffers, checkpoint_segments, maintenance_work_mem, wal_buffers and effective_cache_size.
Take into account that you will probably have to set a new value for SHMMAX. This can be done by adding kernel.shmmax = 8589934592 to /etc/sysctl.conf, where 8589934592 is the result of: 1024 (the shared_buffers value, in MB) * 1024 * 8192.
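As a minimal sketch, the arithmetic behind that value and the reload step look like this (the 1024 MB figure is the shared_buffers example used above; editing /etc/sysctl.conf and running sysctl -p require root, so they are only shown as comments):

```shell
#!/bin/sh
# Compute the kernel.shmmax value from the shared_buffers setting.
SHARED_BUFFERS_MB=1024                          # shared_buffers expressed in MB
SHMMAX=$((SHARED_BUFFERS_MB * 1024 * 8192))     # the multiplication used above

echo "kernel.shmmax = $SHMMAX"                  # the line to append to /etc/sysctl.conf
# Then, as root, reload the kernel parameters:
#   sysctl -p
```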
Again, it is important to check the existing PostgreSQL documentation before you change anything, and change it first on a development environment.
Remember, this is just my own experience.
Managing Reference Datasets is a new functionality based on the Modularity Project. Its goal is to be able to provide some module data, apart from the development itself, so that the user gets the module fully set up when applying it.
But like many other functionalities, Reference Datasets can also be used in many other ways. For example:
- If you are preparing your final environment and a few users have introduced valuable information in your testing environment, a Reference Dataset makes it very easy to move master data from one environment to another.
- I guess it could also be useful for offline synchronization between disconnected OpenbravoERP instances, or between any two applications. For example, having PDAs as routers.
Whatever your situation:
- Create a module and set its “Has reference” checkbox to “Y”.
- Set up a reference data entry of type Organization. Define the tables and columns you want to include or exclude.
- Export the reference data; this will create an XML file.
- Export the database.
- Export the module.
- Import the module in your destination environment.
- Go to Enterprise Module Management (General Setup > Enterprise) and import the reference data.
Of course, some improvements could be made to simplify this process, for example allowing XML files to be exported and imported without having to manage modules. This is already registered as a feature request.
There is also some more documentation available.
Do you have any other situation where reference data is useful?
Using them separately is easy, but using more than one together requires knowing the execution sequence in order to achieve your development design.
A couple of days ago, while developing a new module, I included an auxiliary input, some preferences, more than one callout and many validations in the same WAD-generated window. Furthermore, all of them were related to just one database column.
I mean, there was a column with a validation based on an auxiliary input with a callout, and the field on top of this column was affected by a preference.
How do all these functionalities work together? What is the execution sequence?
- The auxiliary input is included at HTML generation time.
- The validation is applied at execution time, when loading data for the drop-down list (based on a global variable or an auxiliary input).
- Preferences are loaded, so a change is performed on this field.
- The callout is executed based on the preference value change.
As callouts might be tricky if they are not properly developed, it is important to know when they are executed and when they are not. It took me a while to realize there was a preference interfering with my callout.
Take it into account!
I would like to talk about a very useful tool I have just found: ScrapBook.
ScrapBook is a Mozilla Firefox add-on to save web pages and have a look at them later offline. It is really useful when your Internet connectivity is very bad or slow.
It saves web pages, single or multiple: not just one web page but also all the linked pages of the page you are saving, to a depth you decide. You can later highlight some text, add a sticky comment, remove content you do not want to keep from a page, etc. It also includes folder and page management for “scraped” items, import and export tools, size calculation features, etc.
It is a really interesting tool for those who have limited connectivity and need to check some documentation web pages frequently.
For example, I have the Openbravo wiki (http://wiki.openbravo.com) “scraped” on my laptop. It is amazingly useful: I get the information as if I had the wiki on my localhost. Here you have a screenshot:
Do you have any similar interesting tools useful for Openbravo Community?
Modularity gives plenty of options and flexibility when developing, backing up, sharing, updating, customizing, populating, training, installing, etc.
- Developing: When you have more than one development in the same environment, being able to separate each artifact into modules makes the developments much more structured. You can package a module and plug it in or unplug it.
- Managing: If you want to manage and supervise your developers’ work, it is easy to do so by plugging in their modules, verifying them and unplugging them.
- Backing up: It is enough to execute ant package.module to have a backup of whatever you are developing. If you go down the wrong path, you can unplug the module and plug in your backup.
- Sharing: Sending developments from one developer to another is easy. Sharing developments with the community using the central repository is very easy too.
- Updating: Jumping from one maintenance pack to another is easy using modules. Plugging in the .obx file is enough.
- Customizing: It is possible to customize in a development environment (once the customization flag is set to true): hide some fields, show other ones, change properties (read only, mandatory, drop down, length, etc.), and so on. Then you can export this parametrization and move it to the production environment easily. If other developers on the project need to include more parametrizations, they can install the same module, generate a new version, make their changes and apply it in the production environment.
- Populating: Using reference data, populating parametrization tables is very easy.
- Training: Having the sample data needed to build a demo or a training session without too much effort is simple using modularity.
- Installing: Once all the modules have been developed by the different developers, it is easy to plug all the new modules into the production environment. You can build a production environment from scratch and adapt it using the already developed modules. Plug in the modules and it’s ready.
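As a minimal sketch of the backing-up case above (the module name is a hypothetical placeholder; run it from your Openbravo sources directory):

```shell
#!/bin/sh
# Build the packaging command for a module backup (module name is a placeholder).
MODULE=org.mycompany.mymodule
CMD="ant package.module -Dmodule=$MODULE"

echo "$CMD"   # running this command packages the module so it can be plugged back in later
```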
And for sure, there are many more uses for modularity.
Do you have some more?
We consultants aren’t as techie as developers are, and changes take us a bit longer. During the last couple of months I have been introducing myself to PostgreSQL: installing from sources, developing, installing a database client, etc.
Just in case you are in a similar situation:
- In the config/Openbravo.properties file, the bbdd.sid parameter is a different concept from what you are used to in Oracle. If you already have an environment and want to install a second one, bbdd.sid must be different; otherwise, when executing ant install.source, your first database will be erased. In conclusion, when using PostgreSQL you will need a different bbdd.sid per OpenbravoERP installation. That is, the bbdd.sid concept is different in Oracle and in PostgreSQL.
- When accessing the database through a terminal-based front-end, Oracle uses SQL*Plus while PostgreSQL uses psql. They are very similar and really helpful when accessing an OpenbravoERP server over an ssh connection.
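As a small sketch of the difference (the user, password and database names are placeholders; your installation’s defaults may differ):

```shell
#!/bin/sh
# Equivalent terminal front-end invocations; credentials are placeholders.
DB_USER=tad
DB_NAME=openbravo

ORACLE_CMD="sqlplus $DB_USER/password@xe"   # Oracle: SQL*Plus
PG_CMD="psql -U $DB_USER -d $DB_NAME"       # PostgreSQL: psql

echo "$ORACLE_CMD"
echo "$PG_CMD"
# Inside psql, \d table replaces DESC table, and \q replaces EXIT.
```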
Please feel free to comment any additional main differences when switching from Oracle to PostgreSQL.
Usually, neither sales representatives nor consultants give data migration tasks the importance they deserve during the sales process, yet these tasks consume a really huge amount of unplanned time.
Migration tasks include extracting, transforming and loading, which are also known as ETL tasks.
As we can all agree, the effort required to migrate the data of a small enterprise that manages all its activity using spreadsheets is not the same as that of a large company with a previous ERP system that wants to keep all its historical activity.
Therefore, I would like to enumerate a few points to be taken into consideration, before delivering a proposal, for a correct effort evaluation:
As I have tried to explain, the migration offering needs to be as detailed as possible in order to avoid future issues or project deviations from the proposal.
Although the title may sound strange, it has been a pleasure discovering POI!
As I mentioned in a post a couple of weeks ago, POI is an Apache project for spreadsheet generation. I had the opportunity to learn about this tool while advising Microgenesis on some of their current projects. Their customers required quite complex dynamic reports, so they decided to use POI.
These two reports, shown below (click on the image if you do not see the animation), are some of the very interesting customized reports developed for their customers:
The first, generated dynamically from the posted documents’ information, allows the user to get a general view of the accounting information for the last 5 years and also to drill down into each account’s amount. It is also possible to include some accounting ratios (liquidity, profitability, activity, profit margin, etc.) for financial statement analysis, facilitating a better understanding of the information.
The second, generated dynamically from all the assets and amortizations information, gives a whole picture of an enterprise’s asset value and situation for the selected year.
Both examples are generated as spreadsheets with all their functionality: the user can change the information to manage different scenarios and situations, create charts or graphs, edit and include new formulas, etc.
I will give some technical explanations and code examples in upcoming posts.
P.S.: You can also find some more POI examples here.