Layout of the SFTP post below:
- One is, as my colleague Juan Pablo says, "a JAIL for the user", so that he cannot move outside the directory.
- The other is like simple FTP, which lets you move around but not read or write unless you have permissions.

The jail method in short:
- Edit /etc/ssh/sshd_config to include the jail settings.
- Create the sftpuser and set its shell to false, so that the user cannot get SSH access.
- Give correct permissions to the sftpdir.
- For extra security, tighten a few more settings in /etc/ssh/sshd_config.

The second method in short:
- Create a user with /usr/lib/openssh/sftp-server as shell and /var/www/sftpdir as home directory.
- Add /usr/lib/openssh/sftp-server to the /etc/shells file.
- For extra security, adjust /etc/ssh/sshd_config and add your key to the /var/www/sftpdir/.ssh/authorized_keys file.
- Set correct permissions on the sftpdir.

Why SFTP:
- Easy to configure.
- Good security.
- Works with PubkeyAuthentication.
- No extra installation (it uses SSH).
- Easy-to-use SFTP clients.
These are the first words we usually say when we see our production system stuck or crashed for some reason.
But "Someone helps those who help themselves." --Harpreet Singh ;-)
Now jokes apart.
But suppose we got an alert before the system failed: maybe at the first hiccup, when the load started going high, when the total process count grew, or on anything related to the applications running on the server.
Wouldn't this be like a boon, a chance to save the system in time?
If you have seen my earlier posts, you will find monit/munin doing the same, but along the way of my learning I found that Nagios is a better (easier and more flexible/pluggable) tool.
Before explaining why my opinion changed, I will ask you one question here.
What do you expect/need from any system/application monitoring tool?
The general answers would be:
- Good UI.
- Easy Installation.
- Easy configuration.
- Good coverage over different applications and system.
- etc. etc.
Now let's see if Nagios answers all of these.
Like other tools, Nagios has a client-server architecture, which gives us the freedom to monitor any number of systems/applications from one Nagios server.
It has an easy to understand and configure UI, through which you can do many things like scheduling, controlling alerts, etc. And if you are a CLI lover (as most Linux geeks are), you can do all of that from the command line too.
Now here comes the most impressive part.
Nagios is highly flexible. First of all, it has a huge plugin base already available for you to work with.
But if that is not enough for you, then ask yourself just one question.
Do I know how to write a script (bash, python, etc.)?
I usually sum up Nagios in one line: "If you can do it through the CLI, you definitely can do it with Nagios." The same is the answer to the question you just asked yourself. If you can write a script that performs any action (login check, API calls, application query, etc.) and produces a small readable/understandable output for both success and failure cases, then it's child's play to integrate it into Nagios and see the same results in the UI.
In simple words:
- Write a script to perform a certain action.
- Copy it to the Nagios script directory (so that nobody accidentally deletes it).
- Add it to the Nagios commands.
- Call that command for the host you want.
- And done.
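To make the steps concrete, here is a minimal sketch of such a check (the script name, thresholds and values are my own illustration, not from any real setup). Nagios plugins follow a simple convention: print one readable status line and exit 0 for OK, 1 for WARNING, 2 for CRITICAL.

```shell
#!/bin/bash
# check_disk_pct.sh - hypothetical Nagios-style check sketch.
# Prints one status line and returns 0 (OK), 1 (WARNING) or 2 (CRITICAL),
# which is exactly what Nagios expects from a plugin.
check_disk_pct() {
    local used=${1:-42}   # usage percent; 42 is just a demo default
    if [ "$used" -ge 90 ]; then
        echo "CRITICAL - disk is ${used}% full"; return 2
    elif [ "$used" -ge 80 ]; then
        echo "WARNING - disk is ${used}% full"; return 1
    else
        echo "OK - disk is ${used}% full"; return 0
    fi
}
check_disk_pct "$1"
```

Once a script like this sits in the Nagios plugin directory, a command definition in commands.cfg pointing at it, plus a service that uses the command, is all the glue needed to see the result in the UI.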
- Another plus is that you can flaunt the work done (with minimal effort involved) in front of your seniors ;-)
So let me take you on a brief tour.
Some time back I got a chance to work on performance testing of our application (@ Openbravo).
With this opportunity, one quick question popped up in my mind: "What do we seek in any performance testing tool?"
- Easy to adapt to our requirements.
- Easy to operate.
- Easy reports at the end.
In our case, JMeter answered them all.
- With the JMeter recorder we were able to record the flow of the application, and with some adjustments (variables, assertions, halts, etc.) we reproduced the scenario of a normal user working with the application.
- After creating the script, we can easily run it from either the JMeter UI console or the CLI.
- For reports, JMeter itself offers a good number of views (graphs, tree view, etc.), but sharing those reports is a bit of an overhead, so another simple way is to run the test through Hudson (a CI tool) and view the reports using Hudson's JMeter plugin.
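For reference, the headless run mentioned above is a one-liner; a typical invocation (the file names here are my own placeholders) looks like:

```shell
# -n: non-GUI mode, -t: the recorded test plan, -l: results file.
# Hudson's JMeter plugin can chart the .jtl file afterwards.
jmeter -n -t openbravo-flow.jmx -l results.jtl
```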
So basically, before any end user or customer shouts "Why is my application slow?", this tool, integrated with Hudson, does a regular check of delays, failed requests, etc. to track performance and give you an upper hand over similar applications.
While I was on my way with this work, someone asked me: "RM and performance testing? Shouldn't QA do that for you?"
I have a single answer to such questions, and truly writing, there is nothing like "my work" or "your work". It's only a perception; if we have the time and scope to do something, then I think we should extend our hands and do it.
Many of you will have started thinking about FTP servers by now, but to be clear, here I am talking about SFTP (SSH File Transfer Protocol).
But as Shakespeare wrote, "What's in a name? That which we call a rose by any other name would smell as sweet."
And so it is for SFTP, as it provided the usability I was looking for, and that too with minimal configuration and some extra benefits, which we will talk about at the end.
And not only that, I was able to do it in two different ways.
To understand it better, I think a use case will be really helpful.
So let me put down the requirement that pushed me to learn about it.
We needed to grant a user permissions to one directory, and by one directory I literally mean that, as we wanted to block him from peeping into anything else.
That too with minimal access to system binaries, and it had to be secure, etc. etc.
SFTP was the best fit for the requirement; you will see how in the next section, where I have shown the configurations for both cases, and believe me, it couldn't have been simpler.
Let's get into the jail first ;-)
Subsystem sftp internal-sftp
Match User sftpuser
ChrootDirectory /var/www/sftpdir (this makes the user stay under one directory)
useradd -m -s /bin/false sftpuser
chown root:root /var/www/sftpdir
PasswordAuthentication no
And I also added my public key to the /home/sftpuser/.ssh/authorized_keys file, but this is optional, as it has no effect on how SFTP works.
That's all for the jail method. A test connection then looks like this:
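Putting the jail pieces together, the relevant /etc/ssh/sshd_config block usually ends up looking like this (ForceCommand is my addition; it is commonly used so that the matched user gets SFTP and nothing else, and note that sshd refuses to chroot unless the directory is root-owned and not group/world-writable):

```
Subsystem sftp internal-sftp

Match User sftpuser
    ChrootDirectory /var/www/sftpdir
    ForceCommand internal-sftp
    PasswordAuthentication no
```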
Connecting to localhost...
Enter passphrase for key '/home/user/.ssh/id_dsa':
Now let's step back and look at the second way (I know most of us will not read this, as the first one works like a charm):
sudo useradd -s /usr/lib/openssh/sftp-server -d /var/www/sftpdir sftpuser
echo "/usr/lib/openssh/sftp-server" >> /etc/shells
chmod go-w /var/www/sftpdir
chmod 700 /var/www/sftpdir/.ssh
chmod 600 /var/www/sftpdir/.ssh/authorized_keys
Connecting to localhost...
Enter passphrase for key '/home/user/.ssh/id_dsa':
Now about the extra benefits: SFTP is easy to configure, good on security, works with PubkeyAuthentication, needs no extra installation (it rides on SSH), and SFTP clients are easy to use.
That's it for today. Happy SFTPing.
"As more the better" --Harpreet Singh ;-)
Same has been proved by the new Amazon Cluster Compute Instance (cc1.4xlarge).
Amazon recently anounced the availability of it's biggest Instance, ideal for cluster infrastructure, as it promises high connectivity between cluster instances (as high as 10 Gigabit Ethernet).
But we tested this instance standalone, with the Oracle DB and Openbravo on the same instance.
The results were really exciting: it was able to handle around 270 concurrent users.
Now, truly speaking, "That's what I call results."
And the same results have been added to our Sizing Tool Results.
"There is no such thing as a free lunch." --Milton Friedman
This instance also has some drawbacks:
- It costs a lot (almost $1.60 per hour).
- So far it is only available in the US (N. Virginia) region.
- And it is only available with CentOS.
Now here comes another one.
Cost is the biggest concern when we think of any new infrastructure. For example: running an instance (which can support 10 concurrent users) for 3 (THREE!!!) years on Amazon EC2 would cost only $1217.60. I think figures like these can help one weigh on-site against in-cloud (EC2) deployments.
So we extended our Sizing Guidelines to help you choose your Amazon Instance.
In the last section of the Guidelines we have added:
- Steps you can follow to calculate your yearly cost with the Amazon cost calculator.
- A simple calculator we created ourselves, as the Amazon calculator is a bit complex.
- And pre-calculated costs for the most common scenarios.
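As a quick sanity check on the figure above (my own back-of-envelope arithmetic, not from the Amazon calculator), $1217.60 over three years of 24x7 uptime works out to under five cents an hour:

```shell
# 3 years of continuous uptime at a total cost of $1217.60
awk 'BEGIN { printf "effective cost: $%.3f/hour\n", 1217.60 / (3 * 365 * 24) }'
# prints "effective cost: $0.046/hour"
```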
To be among the first, all you need to do is get Ubuntu Maverick Meerkat up and running anywhere you like, be it your hardware system, Amazon EC2 or a virtual machine.
This link will help you if you are planning to install on a hardware system or virtual machine, and these AMIs will boot one in EC2.
Once you are set, let's rock and roll. I mean, start the installation.
So all you have to do to install Openbravo is:
- Enable the Partner’s Repository:
* sudo add-apt-repository "deb http://archive.canonical.com/ubuntu maverick partner"
- Install the openbravo-erp package:
* sudo apt-get update
* sudo apt-get install openbravo-erp
You can also install it using Synaptic or the Ubuntu Software Center:
Can installing any comprehensive ERP be simpler than this?
You can do it even on a Friday. ;-)
Installing most ERPs on a Friday means forgetting about your Friday and Saturday night fun. With Openbravo on Maverick Meerkat, you can start the process at 7 and be at the party by 9!
So once you are done with the party, sorry, I mean the installation, you are set to use it and be a proud user of Openbravo ERP.
It is your love and support that have enabled us to live up to your expectations.
Also, users/developers who want to upgrade from 10.04 to 10.10 can do so without fear of breaking the installation; the only concern is that 10.10 is not an LTS version :-(
For more on installing Openbravo in Ubuntu please follow this wiki.
I wrote that line just a few minutes before writing this blog, as my need to optimize PostgreSQL's performance led me to search for, and discover, some cool facts and features of Postgres and the tools around it.
Postgres doesn't support too many concurrent users by default; it ships with a very conservative configuration aimed at everyone's best guess of how an "average" database on "average" hardware should behave.
Postgres has some configuration options to fine-tune it, like:
- etc etc.
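To give an idea of the kind of knobs involved (the values below are illustrative placeholders of mine, not recommendations; tune them for your own hardware), a fine-tuned postgresql.conf of that era looks like:

```
max_connections = 100         # hard cap on concurrent sessions
shared_buffers = 256MB        # ~25% of RAM is a common starting point
effective_cache_size = 1GB    # how much the OS is expected to cache
work_mem = 16MB               # memory per sort/hash operation
checkpoint_segments = 16      # fewer, larger checkpoints
```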
The one and only con I saw in this is that it is external: I mean, we have to configure an external tool (pgpool) to do connection pooling. On the plus side, here is what pgpool offers:
- Connection Pooling: It reduces connection overhead, and improves system's overall throughput.
- Replication: Using the replication function enables creating a real-time backup on 2 or more physical disks.
- Load Balance: As the name suggests it distributes the queries on two or more replicated servers.
- Limiting Exceeding Connections: With the use of this extra connections are queued instead of returning an error immediately.
- Parallel Query: Using the parallel query function, data can be divided among the multiple (replicated) servers.
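To make that concrete, here is a minimal pgpool-II sketch covering the features above (host names and pool sizes are placeholders of mine):

```
listen_addresses = '*'
port = 9999
backend_hostname0 = 'db1'     # first replicated server
backend_port0 = 5432
backend_hostname1 = 'db2'     # second replicated server
backend_port1 = 5432
num_init_children = 32        # pooled client slots; extra connections queue up
max_pool = 4                  # cached backend connections per child
replication_mode = true       # keep real-time copies on both backends
load_balance_mode = true      # spread read queries across backends
```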
To read more about performance tuning in postgreSQL read this.
For more on pgpool click here.
Long back, we (RM @ Openbravo) introduced a CI (Continuous Integration) tool, Hudson, for testing the code of our core ERP development branch.
It allowed our developers to do:
- Daily Builds (Full/Incremental).
- Smoke Tests.
- DB Consistency Tests.
But as most developers were going modular (Openbravo became modular with Openbravo ERP version 2.50), CI was not able to keep pace and provide similar help for module testing.
To set things right, we enabled our developers to integrate and test their modules using the new CI after committing even a single changeset to their module repository.
To my mind, the goal, or rather the purpose, of this whole effort was to let a developer have a nice and sound sleep after he pushes a commit to the module repository.
Let me explain.
Earlier, developers used to develop a module and then do time-consuming manual testing to make sure their code was bug free.
From a developer's perspective, he cannot sleep properly until his module is tested and deployed properly.
To enable CI for modules and save developers' time (on manual testing), we created a setup that helps developers directly configure a new job in Hudson to test their changesets. This new setup empowers them to do:
- Sanity Check
- Source compilation check and OBX creation
- Database consistency test
- Module's JUnit tests
- Installation of the generated OBX
- Uninstallation of the installed module
- Selenium test (module smoke)
- Upgrade of the module from the previously published version in the CR (Central Repository) to the generated OBX
All of this for even a single new changeset in the module's repository.
And it still has endless possibilities; we can keep integrating new test cases into it.
We have also created a template job (pre-configured with all these test cases) to help developers configure and run tests for their modules easily.
Developers just have to copy the template job to a new job, change the variables to their module's variables, and run the job. We have also created a simple wiki with step-by-step instructions.
* Currently, Openbravo developers working on any module can benefit from this tool (but the sky is the limit; maybe someday we can let partners and the community take advantage of it too).
RM Updates: Amazon backup strategy, Mantis upgrade, automatic process for releasing 2.40, OB@OB
January 7th, 2010
These are the latest news from Openbravo's Release Management team:
Backup Strategy: EBS boot.
Amazon has a new feature, EBS boot, which lets us keep the root partition on an EBS volume and hold up to 1 TB of data in the root partition. This helps us in two ways: a better backup strategy, and the ability to pause and restart an instance, thus saving cost. My colleague gnuyoga has a blog post about the same.
Mantis Upgrade: Upgrade issues.openbravo.com to mantis-1.2.0
As you know, our existing issue tracker is based on Mantis 1.1.8. Mantis 1.2.0 promises a lot of interesting productivity boosters, so we are migrating our current Mantis to the latest version. This involves quite a bit of challenge. In this sprint we address customizations like SSO (Single Sign-On) and custom CSS. If you want to be a beta tester of our new Mantis, please drop us an email and we will give you a test account.
Continuous release of 2.40 branch
The mantra of the 2.40 branch is continuous release, as detailed in my colleague Juan Pablo's blog post. This task is now complete; for details, see here.
OB@OB: Documentation and Linux tool
This task was about documenting the process of replicating the production environment to a testing environment, and creating a new tool that automates this process on Linux.
Simple but effective, HTTP basic auth is probably the quickest and easiest answer.
Setting it up requires only two things:
- An htpasswd file (containing valid user names and passwords).
- And an Apache configuration file to read it.
Creating an htpasswd file:
- htpasswd -cm </path/to/htpasswd-file> <username>
- When adding more users, just drop the c flag from the command above.
- Add this to the default (vhost) configuration file:
Allow from all
AuthType Basic
AuthName "Restricted Area"
AuthUserFile </path/to/htpasswd-file>
Require valid-user
- Now reload Apache and enjoy.
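Assembled inside a vhost, the whole thing might look like this (the directory path and htpasswd location are placeholders of mine):

```
<Directory /var/www/restricted>
    Order allow,deny
    Allow from all
    AuthType Basic
    AuthName "Restricted Area"
    AuthUserFile /etc/apache2/htpasswd
    Require valid-user
</Directory>
```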
Continuous Integration: The team is working really hard on finding solutions for existing challenges, as well as proposing ways to automate current repetitive tasks.
Last sprint we completed one of the most challenging tasks: automated code migration from pi to main. Now we have an OBX generated from the main branch whenever all the tests are successful. Plans to generate an OBX on every commit are heavily debated within the team.
Now we have tecnicia14 resurrected. This will help our developers as well as our QA team see code changes in a live environment (live and liveqa).
Apart from the CI infrastructure, we have also upgraded the issue tracker to Mantis 1.1.8, the latest stable version available. We are also in the process of ensuring we have a hard backup of all the important instances running on Amazon EC2.
For a complete list of the ongoing stories we are working on, please check the Sprint 28 page of our Scrum spreadsheet.