Sizing guidelines for AWS EC2 instance types


Are there any guidelines for what can be expected from each EC2 instance type in terms of the size / type / performance SLAs of the user community it would support?  I know this is a broad question, but it's hard to gauge whether our community of 50 consumers / 10 analysts / 3 power users / 2 developers could reasonably fit on an m3.large.  Are there any rules of thumb in this regard?

Also, it's understood we could turn off the instance when not needed (off hours) to save money, but are there any considerations for having dev/test/prod environments?  I would expect we'd need an instance for each, then turn them on/off as needed.

I'm trying to come up with cost estimates for having a full system in place for X/Y/Z users of each typical BI role, so having a guide for this would be excellent.  Has anyone else done this?

paul_maric
Joined: Dec 9 2016 - 2:37pm
Last seen: 3 years 19 hours ago

I can't really speak to this, but sizing depends as much on the amount of data (for example, a thousand records vs. a million records), the number of parameters, and the complexity of the reports, as well as on how frequently reports are accessed.  So that information would be useful for anyone trying to answer you.

elizam - 3 years 18 hours ago

1 Answer:


Wow, lots of questions :)

#1) For performance-related questions, this wiki page is always a good start.

That will give you an overview of where the bottlenecks will be for a specific use case. Since there are so many moving parts (query time, fetch time, report execution, fill and export, etc.), it is not as simple as "use instance X for Y amount of users."

Note that in AWS, newer-generation instance types (like m4) have more processing power per vCPU than older generations (like m2). For me the bible is this site; note the difference in ECU. One EC2 Compute Unit (ECU) provides the equivalent CPU capacity of a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor.

The server is loaded by concurrent report executions, i.e. a user looking at an already-executed report will have minimal impact, while a user generating a dashboard with 4 dashlets will be executing 4 reports simultaneously.

As a basic rule of thumb, you can use the 100:10:1 rule: for every 100 named users you can expect 10 concurrent users, from which you can expect 1 concurrent report execution at any given time. Of course, this is also broad; you should tailor it to your use case.
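The 100:10:1 rule of thumb above is easy to turn into a quick back-of-the-envelope calculation. Here is a minimal sketch applying it to the asker's community; the 10% ratios come straight from the rule, and rounding up is my own (conservative) assumption:

```python
import math

# Back-of-the-envelope load estimate using the 100:10:1 rule of thumb:
# 100 named users -> ~10 concurrent users -> ~1 concurrent report execution.
# The ratios are defaults you should tailor to your own usage pattern.
def estimate_load(named_users, concurrency_ratio=0.10, execution_ratio=0.10):
    """Return (concurrent users, concurrent report executions), rounded up."""
    concurrent_users = math.ceil(named_users * concurrency_ratio)
    concurrent_execs = math.ceil(concurrent_users * execution_ratio)
    return concurrent_users, concurrent_execs

# The asker's community: 50 consumers + 10 analysts + 3 power users + 2 developers
named = 50 + 10 + 3 + 2  # 65 named users
users, execs = estimate_load(named)
print(f"{named} named -> ~{users} concurrent users, ~{execs} concurrent execution(s)")
# -> 65 named -> ~7 concurrent users, ~1 concurrent execution(s)
```

So at 65 named users you would plan for roughly 7 concurrent users and 1 concurrent report execution at any given time — then sanity-check that against how heavy your individual reports are.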
Based on the little info I have, I think an m3.large will become memory-bound in your case, since it only has 7.5 GB available and I'm assuming you are running the regular AMI, where JRS and the repository DB are on the same box.
Look into the CloudFormation templates for clustering here, which show how to launch Jaspersoft with the repository DB on RDS; this will offload some resources from the AMI. For convenience, I recommend using the cluster deployment even if you use only one machine :)

Regarding dev & test, you are correct: you should create CF templates for your environments and spin those up and down when needed. Use the ones in the link above as a reference. It is much better to automate these things; AWS has really good APIs for managing this automation.
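To illustrate the spin-up/spin-down idea, here is a minimal AWS CLI sketch. It assumes the CLI is configured with credentials; the stack name, template URL, and instance ID are placeholders, not real values:

```shell
#!/usr/bin/env bash
# Hypothetical example -- substitute your own stack name, template URL,
# and instance ID before running anything.

# Spin up a dev environment from a CloudFormation template when needed...
aws cloudformation create-stack \
  --stack-name jasper-dev \
  --template-url "https://s3.amazonaws.com/your-bucket/jasperserver-cluster.template" \
  --capabilities CAPABILITY_IAM

# ...and tear it down again off-hours so you stop paying for it.
aws cloudformation delete-stack --stack-name jasper-dev

# For a long-lived box (e.g. prod) you'd stop/start the instance instead,
# which preserves its EBS volumes:
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 start-instances --instance-ids i-0123456789abcdef0
```

You could put commands like these behind a scheduled job (cron, or a Lambda on a schedule) so dev/test only exist during working hours.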

Hope this helps :)

marianol
Joined: Sep 13 2011 - 8:04am
Last seen: 1 year 2 months ago