Posts by Juan Pablo Aroztegi:
Let’s play a game: find a word that describes in the best possible manner how your ideal ERP should be. Think about it for a minute. Many of you have probably answered Stable. I mean, once you implement it for a customer you want it to behave as you designed it to. No more, no less. No inconveniences, no surprises. That’s stability, right? Admittedly this sounds pretty good.
Some of you, on the other hand, might have answered Featureful, Useful, Pleasant or even Gorgeous. But hey, hold on a minute… if you want something to be Stable it can’t possibly be Beautiful, Pleasant and Featureful at the same time. You know, business software is boring by design. That’s how it’s been up to now, so don’t expect it to change in the future. Learn to live with that.
Really? Is this true? How can we find a balance, offering the user a stable but featureful, pleasant and beautiful product? Imagine an ERP that you enjoy using. Is that even possible? We believe it is, and this is why we have redesigned our release life cycle and support plan. We started applying it in June 2010; let me explain how it works. These are our guiding principles:
- We care about the ERP’s stability: a major release should be there for a long time.
- We care about eliminating headaches for the system integrator: upgrades should be easy. And don’t force me to upgrade to the next major version, let me skip one if I want to.
- I want a featureful ERP: one that helps me get the job done in a pleasant manner. Allow me to get the latest features in an easy manner.
Based on these principles, we have defined three phases for every major release:
- Preventive Support:
- Customers receive support according to SLA.
- Customers are entitled to report defects and expect resolution.
- Openbravo ships regular maintenance packs.
This phase starts on the release day and ends 6 months after the release after next comes out, with a minimum duration of 2.5 years.
Example: 2.40 was released in November 2008, and 3.0 will be released in March 2011. Therefore the Preventive Support phase for 2.40 lasts till September 2011 (March 2011 + 6 months).
- Reactive Support – phase 1:
- Customers receive support according to SLA.
- Customers are entitled to report defects and expect resolution (major severity only).
This phase lasts for 1 year.
Example: 2.40 finishes the Preventive Support phase in September 2011, so the Reactive Support – phase 1 lasts till September 2012.
- Reactive Support – phase 2:
- Customers receive support according to reduced SLA (no critical tickets accepted).
This phase lasts for 2 years.
Example: 2.40 finishes the Reactive Support – phase 1 in September 2012. And the Reactive support – phase 2 lasts till September 2014.
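If you want to double-check these milestones yourself, here is a minimal sketch using GNU date (the release dates come from the examples above; the exact day of the month is arbitrary):
date -d "2011-03-15 + 6 months" +%Y-%m   # Preventive Support for 2.40 ends: 2011-09
date -d "2011-09-15 + 1 year" +%Y-%m     # Reactive Support phase 1 ends: 2012-09
date -d "2012-09-15 + 2 years" +%Y-%m    # Reactive Support phase 2 ends: 2014-09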
Let’s illustrate this so that we can get the idea visually:
Note that Openbravo 3.1 and 3.2 are fictional releases, they are not planned nor developed yet. They are listed here to illustrate the support plans for 2.50 and 3.0 if the future 3.1 and 3.2 were to be released in 03/2012 and 03/2013 respectively.
As you can see in the graph, the minimum support length of a release is 5.5 years. Feel free to check the complete document in our wiki for more details.
We’ve seen there are different user profiles: the more conservative ones, who look for a “Stable” product above all, and the somewhat more aggressive ones, who look for new features and additional stimulus. Which profile fits you better? The good news is we have a plan for both.
Tagged: Release process, Releases, Support
Up to now the 2.50 and 3.0 versions of Openbravo have shared the same Core. “Core” is what we call the most essential module of Openbravo, the one that includes standard functionality like Financial or Sales Management.
If 2.50 and 3.0 have shared the same Core, what’s the difference between them? While a default 2.50 installation includes just Core, the 3.0 version includes a distribution of several modules: right now “Core” plus 16 more.
In terms of source code management we decided to keep one code line for as long as possible, simply because it reduces our developers’ backporting work. Backporting basically means that a bug fixed in 3.0 should also be fixed in 2.50 (if it affects 2.50, of course).
The news is that we have now reached the point where the 2.50 and 3.0 Core cannot share the same code any more. So we are planning to branch the 2.50 and 3.0 code-lines around the 15th of November.
And that’s not all. According to our plans we’ll be releasing version 3.0RC3 precisely on the 15th of November. The extra news is that we’ll have two 2.50 MPs in November, 2.50MP24 and 2.50MP25. Why? Simple: as the “Core” is still shared between 2.50 and 3.0, version 3.0RC3 will need some code that is not yet in 2.50MP23. And this is why we need 2.50MP24.
In terms of timing, we’ll freeze 2.50MP24 on the 1st of November and release it on the 15th of that month. So are we releasing both 2.50MP24 and 3.0RC3 on the 15th of November? That’s right, and starting from that point the 2.50 and 3.0 Core code-lines will follow their own paths.
Lastly, a couple of notes about how this affects the 2.50 to 3.0 upgrade process:
- While we are introducing a separate branch to simplify our code management, 3.0 is just another version of core and updating from 2.50 MPx to 3.0 is not any different than updating from 2.50 MPx to 2.50 MPy. In both cases, it is a full code replacement of core and, as long as the system has been configured using the principles of modularity and does not contain any core customization, system configurations and extensions are going to be preserved during the upgrade.
- We will be cleaning the 3.0 core more aggressively than we are doing with the 2.50 core. Because of that, we can expect a higher volume of API changes in the 3.0 code line for the next few months and up to general availability date. However, we will continue to be conservative in 3.0 as well and accept only low risk API changes in an effort to minimize the chances to break module dependencies. 3.0 API changes will be listed in the Release Notes and users intending to upgrade from 2.50 are encouraged to monitor those notifications.
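As a quick sanity check before planning the upgrade, you can ask Mercurial whether your instance carries any local core modifications. A minimal sketch, assuming your Openbravo sources live in /opt/openbravo (adjust the path to your installation):
cd /opt/openbravo
hg status --modified --added --removed   # any output here means core customizations
hg diff --stat                           # a summary of those local changes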
Tagged: API, Mercurial, Release process, Releases, SCM
Let’s say you’ve been working hard customizing Openbravo for a customer. And now you’d like to deploy it, so that the customer can start using it immediately. Next step? You go ahead and start installing the ERP on a server… wait! Not yet! You first need to think about the hardware, don’t you? Not so relevant nowadays? A commodity? Try to answer these questions and you’ll see why it matters:
- How many employees does the company have?
- From these employees, how many of them will use the ERP?
- From these users, could you classify them depending on the amount of time and intensity they’ll devote to the ERP?
- What do the most typical user flows look like? Standard ERP flows? Customized ones? Heavy report generation or long processes?
As you can imagine, the hardware requirements differ quite significantly depending on whether you need to support 5, 20 or 100 concurrent users.
And let me ask you two additional questions:
- Do you care about saving costs?
- Do you like the feelings (aka headaches) that arise when you notice that your server just cannot cope with all the requests to the ERP? Imagine you’ve just bought some new hardware and it’s not enough. Even worse, the customer can no longer work under those conditions; they’re unhappy about having to buy new hardware again and they no longer trust your ability to choose the right hardware.
“Well, of course I care about costs, my time and the service level I provide to my customers! Are you kidding?”
OK, I might have dramatized it a bit. Or have I? If you’ve ever participated in an ERP implementation process, you’ll know that these things happen. Obviously no one consciously chooses to increase the project’s costs, look for headaches and decrease overall customer satisfaction. But these might be some of the consequences of incorrectly sizing your Openbravo installation.
Sizing? Sizing, you say? A sizing is an approximation of the hardware resources required to support a specific software implementation, in this case Openbravo. And I have good news for you: all these inconveniences can be avoided by using the Sizing Tool we’ve just developed for you!
In a nutshell, you simply tell the tool how many concurrent users you want to support and it tells you what hardware you can use. And in case you want to calculate it using your own flows, the tool supports that as well.
To give some behind-the-scenes details: we have written test cases for the most common Openbravo flows using a performance benchmarking tool called JMeter. This, combined with our experience with real customers, has resulted in the Sizing Tool and Guidelines.
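To illustrate the kind of run behind those numbers, this is how a JMeter test plan is executed in non-GUI mode (the test plan file name and the users property here are hypothetical, not the actual names used by our tool):
# -n runs JMeter headless, -t selects the test plan, -l stores the results, -J sets a property
jmeter -n -t openbravo_sales_flow.jmx -l results.jtl -Jusers=100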
Additionally, we have confirmed some interesting data with this tool. For instance, we’ve been especially pleased to see that Openbravo supports 100-250 concurrent users without a hitch (given the proper hardware, of course).
Tagged: Performance, Sizing
Stability is a keyword inherently attached to ERP systems. System integrators and end users want a system that simply offers them the capabilities they need in a pleasant manner and at any moment. In this case stability means that it always works in the way they expect it to work. Openbravo takes this challenge very seriously, as you can see in our current 2.50 MPs (Maintenance Packs) release process:
- We run a set of automated tests on every commit, which in practice means a 24/7 job: build tests, sanity tests, upgrade tests, functional tests, etc.
- As a general rule every commit is related to a resolved issue. The QA team, together with the development team, leads the effort of individually verifying the correct resolution of these issues.
- The QA team performs a complete set of manual tests before releasing a maintenance pack, to guarantee the quality of the release.
This is the global picture of our current 2.50 MP release process, delivered at the beginning of every month. Now we would like to go beyond this by offering an additional service level in our MPs. The Life Cycle Management feature introduced in version 2.50MP20 makes this possible: whenever a released MP stays live without known issues for 40 days, we’ll tag that release in a special manner, so that system integrators can choose either the current stable versions or the ones that have matured for 40 days.
This Life Cycle Management feature introduces the concept of Maturity level for modules, which allows Openbravo to make use of its statuses as follows:
- Test: primarily used by the Openbravo QA team. This is the pre-release status.
- QA Approved (old name: Controlled Release): this is the current maturity level of the MPs once they are released. It means they have passed the automated tests, the issues have been individually verified and the QA team has run a comprehensive set of manual tests.
- Confirmed Stable (old name: General Availability): the module has passed 40 days in the QA Approved maturity level without any known issues.
So in practice, what does this mean for a system integrator? Simple: those looking for exactly the same maturity level as the current MPs should use the QA Approved status. And those who would like to go beyond this level should consider using the Confirmed Stable status.
Configuring this is as simple as selecting the desired setting in the Module Management Console:
Note that Confirmed Stable is the default option for all the modules. We’ll apply this policy for Core starting from 2.50MP23.
EDIT, 2011/07/04: the maturity levels have been renamed as follows:
- Controlled Release → QA Approved.
- General Availability → Confirmed Stable.
Tagged: QA, Release process, Releases
These have been some weeks of hard but passionate work for Ken, an Openbravo ERP Core developer. He’s been implementing a couple of features in the accounting engine. In fact he’s pretty excited:
- “Wow, this is really going to be valuable for our users!”.
But hold on, Ken: you think you’ve finished the development process, but you haven’t. Let’s go back to the beginning, so that you can understand.
Once upon a time, Ken and his team detected the need to develop a new feature. Since this is a relevant change in the Core of the product, Ken decided to start developing the new feature in a new Mercurial branch. It’s the common-sense choice for these cases, because he doesn’t want to integrate these changes into Core until they are ready for general usage. Then he dived into the interesting part: he spent 2 weeks coding and writing new tests for this feature. So there he is: he wants the end users to enjoy this new development as soon as possible. This is legitimate and desirable, but Core has a golden rule which cannot be broken:
To put it simply: all the balls in int must always be green. You might be wondering what these tests cover. Click on any of the jobs listed in the integration tab to see a description of what they do.
So Ken is now a bit frightened, because although he has written quality code, quoting his own words:
You never know for sure if you pass the tests until you pass them
And running the tests on your own machine is not an option: on the one hand they are difficult to set up, and on the other we’re continuously adding new tests to this process. As this is a legitimate concern, we’ve decided to close this gap: we’ll provide Ken with a way of running all the tests in a simulated environment, a real clone of the integration process.
So how can Ken run all the tests on his branch? Simple, just running this command:
hg push -f https://code.openbravo.com/erp/devel/try
And then monitor the jobs in the try tab. As of now only one developer can use try at a time, so you need to click on the Changes link of the active job to see whether it’s testing your branch or not.
To make things easier to remember, you can add the following snippet to your main Mercurial configuration file (e.g. $HOME/.hgrc):
[paths]
try = https://code.openbravo.com/erp/devel/try
That way, if you ever need to run the tests on a branch, the process is as simple as running this command:
hg push -f try
But Ken is confused:
- “How can everyone push to the same repository? We are using different branches. Isn’t this crazy?”.
The key lies in the -f argument we’re passing to the hg push command. If you’re interested in the basic internals of how this works: every time you push your commits you create a new head in the repository, and the last commit of this head effectively becomes the tip of the entire repository. The try tests are run against this tip. Confusing? If you want a better understanding, I suggest you play with a local repository and see how it evolves as you push new heads. If you don’t care, just use it and forget about the internals.
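If you want to try this locally, here is a minimal sketch that reproduces the situation: two clones pushing their own work to the same repository, the second push forced, just like pushes to try:
hg init central
hg clone central work1 && hg clone central work2
cd work1 && echo a > f1 && hg add f1 && hg commit -m "first branch" && hg push && cd ..
cd work2 && echo b > f2 && hg add f2 && hg commit -m "second branch" && hg push -f && cd ..
hg heads -R central   # two heads; the one pushed last is now the tip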
So what’s next? This is just the beginning of the try feature; you can consult the reference documentation. Some future plans on our TODO list:
- Add e-mail notifications when your build starts, ends and with the final result.
- Increase the number of developers that can use this simultaneously.
This feature is available for all the Core contributors (Openbravo S.L.U staff and external contributors). Would you like to become a Core contributor? Great! Read the following guide.
I’m sure you have suggestions about this, and we’ll be happy to hear them. You can do it either by adding a comment here or by joining us in the #openbravo IRC channel.
I would like to thank the Mozilla team for the original idea behind this concept.
Tagged: Continuous Integration, Mercurial, QA, SCM, testing
Ladies and gentlemen, fasten your seat belts. It’s here: be ready for the new Cloud experience with Openbravo ERP and Ubuntu, because the long-awaited Ubuntu release, codename Lucid Lynx 10.04, has just been released today! Two key words you should never forget:
- LTS: this is a Long Term Support release. A new LTS version is released every 2 years and it gets longer support and more focus on security, stability and maintenance.
- Cloud: first class cloud computing support in Amazon EC2 and Ubuntu Enterprise Cloud (UEC). It’s never been easier to work with your Openbravo ERP Ubuntu instance on the Cloud.
Fine, this sounds good. But how do I install it? First you need to get Ubuntu Lucid Lynx up and running. You can do this in two main ways:
- Install Ubuntu on a local machine: your own computer, a local server or a virtual machine.
- Run an instance on the Amazon EC2 cloud. Use the official AMIs. If it’s for a production server make sure you choose the EBS powered images (aka persistent storage).
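For reference, launching one of those AMIs from the command line with the EC2 API tools looks like this (the AMI id and key pair name are placeholders, take the real ids from the official list):
ec2-run-instances ami-xxxxxxxx -k my-keypair -t m1.small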
And once this is done, install Openbravo ERP:
- Enable the Partner’s Repository:
sudo add-apt-repository "deb http://archive.canonical.com/ubuntu lucid partner"
sudo add-apt-repository "deb-src http://archive.canonical.com/ubuntu lucid partner"
- Install the openbravo-erp package:
sudo apt-get update
sudo apt-get install openbravo-erp
You can also install it using Synaptic or the Ubuntu Software Center:
For more details about the installation process you can check the complete instructions in the user’s manual.
This package is powered by PostgreSQL 8.4, OpenJDK 6, Tomcat 6 and the Apache HTTP Server 2.2.
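Once the installation finishes, a quick sanity check can confirm that everything is up (assuming the default Tomcat port and context path; adjust them if you have customized the setup):
sudo service tomcat6 status
curl -I http://localhost:8080/openbravo   # expect an HTTP response, typically a redirect to the login page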
Tagged: Cloud computing, EC2, OpenJDK, Packaging, PostgreSQL, Releases, Ubuntu
Ubuntu Lucid Lynx 10.04 LTS is around the corner. It’s the latest and greatest of the Ubuntu releases and it will be ready on the 29th of April. It’s a special release, different from the previous Karmic 9.10, Jaunty 9.04 and Intrepid 8.10 releases. In fact it’s the most important Ubuntu release in the last 2 years. Why? The key lies in the LTS term, which is an abbreviation for Long Term Support. In a nutshell, a new LTS version is released every 2 years and it gets longer support for security and maintenance updates, and also more testing. This means that it will be the natural choice for those who want to use Ubuntu on a production server for the next 2 years. Openbravo is aware of this fact and we’re getting ready for it.
Now we need your help to get it fine-tuned. Would you like to help out? Great!
We have just released a beta version of the package, so we encourage you to test it and give us any feedback you find valuable:
- Tell us what you like about the package.
- Tell us what you don’t like.
- Tell us what you think could be improved.
- Anything else you think is important.
Being a beta package, keep in mind these important notes:
- Do not use it in production environments.
- The official release of this package is scheduled for the 29th of April.
What’s new in this package? There is a significant change compared to our Karmic release: we have switched to OpenJDK! For those unfamiliar with OpenJDK, it’s a 100% open/free version of the Sun JDK, as well as 100% compatible. Some months ago we foresaw this need and started working to support it.
So how can I test the Openbravo ERP Lucid Lynx package? First you need to get the beta version of Ubuntu Lucid Lynx up and running. You can do this in two main ways:
- Install Ubuntu on a local machine: your own computer, a local server or a virtual machine.
- Run an instance on the Amazon EC2 cloud. Use the official AMIs.
Once this is ready, add our testing repository and install openbravo-erp:
- Add our testing repository:
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys C2F11D81
sudo sh -c 'echo "deb http://ppa.launchpad.net/openbravo-isv/ppa/ubuntu lucid main" >> /etc/apt/sources.list'
sudo sh -c 'echo "deb-src http://ppa.launchpad.net/openbravo-isv/ppa/ubuntu lucid main" >> /etc/apt/sources.list'
- Install the openbravo-erp package:
sudo apt-get update
sudo apt-get install openbravo-erp
You can also use Synaptic or your favourite package management tool.
This is how you can send us your feedback:
- Add your comments to this blog post.
- Report an issue: https://bugs.launchpad.net/openbravo-erp
- Connect to the #openbravo IRC channel in Freenode.
This beta testing phase will last four weeks, primarily because our intention is to release our package together with the final version of Lucid. This is your opportunity to test the long-awaited Ubuntu release together with the new Openbravo ERP package. And why not, it’s also a good chance to get involved in the project!
UPDATE (2010/04/09): the beta2 version of Lucid is out, download links updated.
Tagged: EC2, OpenJDK, Packaging, Releases, Ubuntu
Openbravo ERP 2.50 has a time-based release plan. This helps us get bug fixes and new features into our users’ hands more quickly and improve our planning process.
Up to now we’ve been releasing Maintenance Packs on a monthly basis. However, we have detected some areas for improvement, mainly two:
- We want to release like a clock: same time, same day, every month. Everyone benefits from this:
- End users appreciate knowing the release dates without even reading a release calendar. They simply know that they’ll have an update at the end of every month.
- Our engineering team can organize more effectively: developers, QA, Support, Release Management.
- Developers should know which of the commits they push go into which release. This way they can organize their work more efficiently.
So let’s start! We already have the list of desired improvements and we know that we need to set the key dates. So first we need to identify the different kinds of events involved in the development process:
- Regular commits (e.g. bugfixes or small features).
- Large project merges (e.g. a new large feature).
- QA automatic tests.
- Feature freeze for the release.
- Release packaging.
- QA manual tests.
- Publish the release.
Now we need to set some dates and deadlines. The QA manual work – bug found – fix it loop lasts around 10 days. So the freeze and packaging should be before the 20th. And we want to provide a guarantee to developers, so that they know which commits go into what release. Here’s a proposal:
- All the commits pushed before the 15th of a month will go into that month’s release. We’ll make sure all the tests are passed, working together with the developers in case there’s a failure.
- We’ll freeze the code on the 18th and start the packaging right after that moment. This means we have 3 days to make the code pass the entire automatic test suite, which should be more than enough. It also means that commits pushed between the 15th and the 18th might go into that month’s release, if they pass the tests of course.
- From that moment on QA can start the manual work. If they find issues, the affected developer will work on the fix and they’ll iterate on this process till QA is happy with the result.
- Afterwards the Release Management team can publish the release.
And last but not least, project merges tend to create instabilities in the code main line, compared to having no reintegrations at all. So taking this into account, the project merges must be done within a specific time frame: from the 22nd of one month until the 5th of the next one. This gives the code enough room (10 days) to get stable before the freeze.
We can see these policies summarized in the following graph:
Improving a release process is usually an evolution, not a revolution. And I believe this is the first step towards better timely releases.
Tagged: Release process, Releases
Cloud Computing services offer a great deal of advantages to end users and sysadmins. I’m sure most of us appreciate not having to take care of the hardware, having the agility to provision new resources, being able to recover quickly from disasters, benefiting from very high network bandwidth, and so forth. It’s a very good solution for many situations.
However, managing a Cloud is different from traditional server management. We have been using Amazon EC2 at Openbravo for the last couple of years, so I would like to share some tips with you regarding backup management and recovery.
DNS management: Elastic IPs
Making backups is the first step towards a successful system recovery. But in order to benefit from these backups it is also essential to react in a quick and easy manner when something goes wrong, so that the system is back up and running in a matter of minutes. One of the biggest delays in traditional server management comes with DNS changes. That is, migrating to a new machine usually means having a new IP address. This forces you to update the DNS records, which might take days to propagate.
Amazon has a nice feature called Elastic IP that allows you to forget about this. You basically allocate an IP address for yourself, so that you can assign it to any of your machines “on the fly”.
Think about the difference:
- Traditional DNS management: if a new server gives me a new IP I have to change the DNS records.
- Elastic IP DNS management: I can assign my IP address to any of my instances.
The first option might take several days to propagate across the entire world, while the second one is immediate.
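With the EC2 API tools the whole operation takes two commands (the instance id and the address below are placeholders):
ec2-allocate-address                              # reserves a new Elastic IP for your account
ec2-associate-address 203.0.113.10 -i i-12345678  # attaches that IP to the chosen instance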
EBS: persistent storage
Amazon EC2 has continuously improved the features it offers. One of the limitations it initially had was that the hard drives were not persistent. That is, if you shut down a machine you lose your data. In September 2008 Amazon introduced EBS (Elastic Block Store) hard drives, which provide persistent storage that essentially works like a normal hard drive you can attach to any of your instances. However, you could not run the entire operating system on one of these EBS drives. In any case this was a big step forward, given that you could save your critical dynamic data (e.g. a database) in persistent storage. Starting from December 2009 you can finally run an entire operating system on an EBS unit.
This feature is relatively new, so most of the public AMIs are still “instance-store” based instead of “EBS-store” based. The recommendation here is incremental:
- Make sure you save your critical dynamic data on an EBS drive.
- If possible, run the entire instance on an EBS drive.
I’ve just said that it works like a regular hard drive you can attach to any instance. This is not entirely true; it’s better: you can make snapshots of the entire disk, as many as you want, and they are incremental. You can of course restore them into a new drive any time you want.
So as a last tip with EBS: do regular snapshots, and make sure you purge the old ones.
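With the EC2 API tools this routine can be a couple of lines in a cron job (the volume and snapshot ids are placeholders):
ec2-create-snapshot vol-12345678 -d "nightly openbravo backup"   # takes an incremental snapshot
ec2-describe-snapshots                                           # lists your snapshots, so you can spot old ones
ec2-delete-snapshot snap-12345678                                # purges a snapshot you no longer need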
There’s a golden rule with regard to backups: “Do not put all the eggs in the same basket”. It is very unlikely for Amazon EC2 to suffer a general downtime or disaster that would make you lose your data (e.g. a fire). They make sure this won’t happen and they do their backup homework as well. In any case it is generally a good idea to have a second recovery plan, physically isolated from your main backup location.
Amazon EC2 currently has two independent regions (US and EU), so the first option is to replicate the backups from one region to the other. However, if a malicious user got hold of your EC2 credentials they might be tempted to wipe out all your data in both regions. To avoid this, as a recommendation, create an extra EC2/S3 account with complete read-only access to the first account, so that your backups cannot be compromised in that way.
If you are more paranoid than this, you can schedule weekly or biweekly backups to an external storage service provider.
It is sometimes very useful to have a recent backup available with you at your office. One option is to download it. But depending on the size of your backups and your network speed this might be prohibitive. Amazon has a nice feature called Import/Export that covers this need:
- You send a hard drive to them with instructions.
- They load the requested data into the hard drive.
- They send you the hard drive back.
Openbravo ERP and EC2
OK, those tips sound reasonable. So what should I specifically do with my Openbravo ERP instance in EC2? Stay tuned, a new post covering this topic will be coming soon.
Tagged: Cloud computing, EC2
Give me a place to stand on, and I will move the Earth.
This is a legend ascribed to the famous Archimedes, genius of antiquity. When he understood the basic principles behind the lever he felt its power and started seeing things in a different way. His paradigm changed.
In a similar way, SSH is a tool that can completely change the way you work, for good. Let me show you why you, as a system integrator, should be interested in its wonders. And I can assure you that if you get familiar with the basic principles behind it, you’ll be able to perform tasks you would never have imagined possible.
Case 1: Browsing through a remote computer
There are multiple situations where browsing through a remote computer is interesting:
- IP address restrictions: a customer has restricted access so that I can only access a remote machine from my network at work. I am at home and I really need to access this server.
- Content filtering: I am located in a network or a country that restricts my Internet connection. And I really need to access some pages which are key to do my job.
- Remote LAN access: I have access to a remote computer. But I would like to access the ERP or database of another computer in the same local network.
So this is the magic that makes this possible:
ssh -D 8888 johndoe@remote_computer
This basically converts the remote computer into a proxy server just for you, so you only need to provide this information to your web browser. Using Firefox as an example, you’d go to Preferences → Advanced → Network → Settings, select Manual proxy configuration and finally enter localhost in the SOCKS Host field and 8888 as the port number. The simplest way to verify it’s working is to visit whatismyip.org and check your IP address: it should be the remote one.
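If you use this often, you can store the whole setup in $HOME/.ssh/config; this hypothetical entry reproduces the command above:
Host customer-proxy
    HostName remote_computer
    User johndoe
    DynamicForward 8888
After that, running ssh customer-proxy establishes the same SOCKS proxy.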
Case 2: Securely connect to a remote database
This is a typical scenario: the remote machine has only SSH open and there is no direct access to the database. Let’s suppose it’s PostgreSQL running on port 5432 on the remote computer, but the port is not open to the outside world, only to local connections. As you have SSH access, you can ask it to redirect the remote 5432 port to any port on your local machine, like 5433:
ssh -L 5433:localhost:5432 johndoe@remote_computer
Now you can start psql, pgAdmin or your favorite client and use localhost as the host and 5433 as the port in the connection details.
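For example, a terminal connection with the openbravo database and the tad user (the same ones that appear later in Case 4) would be:
psql -d openbravo -U tad -h localhost -p 5433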
As for Oracle, the concept is the same and just the port numbers change:
ssh -L 1522:localhost:1521 johndoe@remote_computer
Case 3: Expose my local ERP into a remote network
Let’s suppose that I have Openbravo ERP beautifully running on my local machine, including some nice new changes we’ve been working on. I would like to show it to Mike and Sandra, but they are located in a remote network. I am in a hotel; there is no way I can ask the hotel’s IT staff to open a port so that they can access the ERP on my computer. SSH comes to the rescue again: you can basically perform the opposite operation of Case 2 and forward your local web server port to any port of a remote machine:
ssh -R :9999:localhost:80 johndoe@remote_computer
So now I can ask Mike and Sandra to enter http://local_ip_of_remote_computer:9999 and bingo, they can access my ERP installation.
Important note: for this feature to work the server’s SSH configuration (sshd_config) must have the GatewayPorts option set to yes.
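In practice that means having this line in /etc/ssh/sshd_config on the remote computer (and reloading the SSH service afterwards, e.g. sudo /etc/init.d/ssh reload on Ubuntu):
GatewayPorts yes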
Case 4: Securely connect to a remote database available only in the LAN
Now let’s suppose I have SSH access to remote_computer, but not to remote_computer-2, which is in the same LAN as the first one. And I want to access the database on remote_computer-2 using my graphical SQL client. There are multiple ways of solving this, using variants of Case 1 or Case 2. We’ll do it by extending the first case. First, open the SSH connection and establish the local proxy server:
ssh -D 8888 johndoe@remote_computer
Now we want to tell our PostgreSQL client to use this proxy. But usually these clients don’t support this feature, so here proxychains comes to the rescue. This is a tool that allows you to make any program use the Internet connection through that proxy. Once it is installed, it needs a minimal configuration in $HOME/.proxychains/proxychains.conf, only required the first time you use it:
dynamic_chain
tcp_read_time_out 15000
tcp_connect_time_out 10000
[ProxyList]
socks5 127.0.0.1 8888
From now on you can prepend the proxychains command to your program and it will reach the Internet through the proxy server connection. So for example, in our case we would go to a terminal and connect straight to the database on remote_computer-2:
proxychains psql -d openbravo -U tad -h remote_computer-2 -p 5432
As you can see, SSH opens a new world of possibilities for you. Invest some time playing with it; you won’t regret it.
Some final words for Windows users: don’t worry, this is not valid only for UNIX-based systems. If you run Windows on your computer you can use PuTTY to achieve exactly the same results.
UPDATE (2010/04/26): adding the GatewayPorts requirement and the corrected ssh command based on Asier’s comments.
Tagged: Security, SSH