I have spent a lot of time working in Exalytics environments, and while they are incredibly fast, I have always been curious whether there are ways to squeeze a little more performance out of such a supercharged server. Specifically, is it possible to make Essbase on Exalytics even faster? Well, I recently found a way to do so.
I have been working with a very large, sparsely populated BSO Planning application with five sparse aggregating dimensions. At the moment, the database is spread across 11 page files totaling 20.5 GB of data, which makes aggregating it a time-consuming process. We have switched the aggregations to a scheduled process, so users don't have to wait for aggregations on save.
Why is it slow?
Obviously, this is a large application, and aggregations tend to take a while on large applications, but where is the pinch point? I studied the server statistics during the aggregation process and found that the slowest part of the process was the reads and writes to the hard drive. Exalytics uses flash storage, so this is already faster than on most other servers, but since it was the bottleneck, I still wanted to see if it could be faster.
How can you make the disk faster?
While researching something else, I ran across the concept of reassigning RAM as a temporary drive on Linux-based servers. With a simple shell command, you can set aside some of a server's RAM to function as if it were a hard drive (commonly called a RAM disk). This is apparently commonplace for applications that rely heavily on temporary files. The downside is that these drives are wiped every time the server restarts.
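The shell command in question is typically a tmpfs mount, the standard Linux mechanism for RAM-backed filesystems. A minimal sketch follows; the mount point and size here are illustrative assumptions, not values from my server, and the size should be kept well below the RAM you can spare:

```shell
# Assumed mount point and size -- adjust for your server.
sudo mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=64g tmpfs /mnt/ramdisk

# Verify the mount and check how much memory headroom remains.
df -h /mnt/ramdisk
free -g

# Note: tmpfs contents vanish on reboot. An /etc/fstab entry only
# re-creates the (empty) mount automatically, it does not preserve data:
#   tmpfs  /mnt/ramdisk  tmpfs  size=64g  0  0
```

A tmpfs mount only consumes physical memory as files are actually written, but the configured size is a hard cap, so size it to hold the full set of page and index files.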
The Exalytics server I was working on already had a temporary drive with around 500 GB of space on it, so I decided to try it. I changed the location of the database's page and index files and reloaded the data. The results were instantly apparent: the partial aggregation time dropped from ~415 seconds to ~130 seconds. That is more than a 3x performance improvement from hosting the application on RAM instead of flash storage. Unfortunately, this still wasn't fast enough to run aggregations on save. However, it was encouraging to see that performance can be pushed even further than what is given out of the box.
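For reference, one way to point a BSO database's page and index files at a new location is a MaxL disk volume setting, issued through the essmsh command shell. The application name, database name, credentials, and path below are all assumptions for illustration; check the exact disk volume syntax against the MaxL reference for your Essbase release, and note that existing files must be moved or reloaded after the change:

```shell
# Assumed names: app "PlanApp", db "Plan", RAM disk at /mnt/ramdisk.
# Verify against the MaxL reference for your release before using.
essmsh <<'EOF'
login admin identified by 'password' on localhost;
alter database PlanApp.Plan add disk volume '/mnt/ramdisk';
logout;
EOF
```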
As I mentioned before, these drives are deleted on server restart, so if you intend to use this, you need to be careful. There should be a backup process that runs at regular intervals, as well as a backup/restore process for if and when the server is ever restarted. Theoretically, it should never need to be, but things happen. Implementing a backup process that utilizes transaction logging could help restore applications to the exact moment of a crash. The other thing to watch is how much RAM you are using: repurposing RAM as disk space is going to be detrimental overall if it ends up putting a memory constraint on the server.
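The interval backup above can be sketched as a small shell function that archives the RAM disk to persistent storage; after a reboot, you would re-create the mount and extract the latest archive into it. The paths and naming are assumptions for illustration, and on a live system you would run this during a quiet window or after stopping the application so the files are consistent:

```shell
#!/bin/sh
# Sketch of a periodic backup for database files hosted on a RAM disk.
# Paths and names are illustrative assumptions, not from the original post.

backup_ramdisk() {
    # $1 = directory holding the .pag/.ind files, $2 = backup destination
    src="$1"
    dest="$2"
    stamp=$(date +%Y%m%d_%H%M%S)
    mkdir -p "$dest" || return 1
    # Archive the whole directory to persistent storage; restore by
    # extracting into the re-created RAM disk after a reboot.
    tar -czf "$dest/essbase_ram_$stamp.tar.gz" -C "$src" . || return 1
    # Print the archive path so callers/cron logs can record it.
    echo "$dest/essbase_ram_$stamp.tar.gz"
}
```

Scheduling is then a one-line cron entry, e.g. hourly: `0 * * * * /usr/local/bin/ramdisk_backup.sh` (script name assumed).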
The other thing to note is that I did this on a BSO application. Based on other blogs I have read, and general knowledge of ASO cubes, I do not think it would have the same effect on an ASO application. Since those applications are more RAM-based anyway, mounting the files on RAM likely wouldn't change much, but it could be worth a try.
Overall, this is a relatively easy way to improve performance on a BSO application. If you are able to do something like this and secure yourself against unexpected crashes that could result in data loss, this might be something for you to try.
Questions? Comments? Feel free to reach me at:
David Grande, firstname.lastname@example.org