...
The following are typical server sizes for small, medium and large installations. For large installations we recommend that you engage a PhixFlow consultant to verify requirements and analyse the data volumes to be processed and retained within PhixFlow. Please note that backup/recovery capacity is excluded from these sizing estimates.
| Server Size | Small | Medium | Large |
|---|---|---|---|
| Daily records | 10m | 70m | 200m |
| **Application Server** | | | |
| CPU cores | 4 | 12 | 24 |
| Memory | 16 GB | 32 GB | 64 GB |
| Disk space ¹ | 50 GB | 100 GB | 200 GB |
| **Database Server** | | | |
| CPU cores | 2 | 6 | 12 |
| Memory | 4 GB | 8 GB | 16 GB |
| Data disk space ² | 250 GB | 2 TB | 4 TB |
| Redo/Undo space | 40 GB | 400 GB | 800 GB |
¹ Note that the disk space on the application server is sized assuming that files may be placed on the server disk for PhixFlow to read. If no files are to be placed on the application server (e.g. PhixFlow will read files from an existing location) then only a small disk is required for the operating system and the PhixFlow application software.
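As a quick sanity check, the free space on the volume that will hold incoming files can be confirmed from the command line; the path below is only an assumed install location, not a PhixFlow requirement:

```
# Check free space on the volume that will hold files for PhixFlow to read.
# /opt/phixflow is an assumed location; substitute your own path.
df -h /opt/phixflow
```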
...
For medium and large implementations the database should be configured for high-performance throughput. In particular, database performance is significantly affected by the way the database server disks are configured. Organisations requiring servers of this size will generally have their own database administrators who are familiar with this level of planning; however, for clarity, the following recommendations apply when setting up a medium installation:
| Disk group | Recommended configuration |
|---|---|
| Disk Group 1 | 8 x 300 GB in RAID 5 configuration (used for Oracle data tablespace). Format the stripe with a block size that is optimal for data throughput (typically 4 KB). |
| Disk Group 2 | 4 x 146 GB in RAID 1+0 configuration (used for Oracle redo). Format the mirror with a block size of 512 bytes. Two redo groups, multiplexed. |
| Disk Group 3 | 2 x 146 GB in RAID 1+0 configuration (used for Oracle undo). Format the mirror with a 4 KB block size. |
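As an illustration only, the arithmetic behind aligning a filesystem to the Disk Group 1 stripe might look as follows. The device name and RAID chunk size are assumptions and should be replaced with values from your own storage configuration:

```
# Assumptions: 8-disk RAID 5 (7 data disks + 1 parity), 256 KB chunk size,
# 4 KB filesystem block size, array exposed as /dev/md0 (hypothetical).
#
#   stride       = chunk size / block size = 256 KB / 4 KB = 64
#   stripe_width = stride * data disks     = 64 * 7        = 448
mkfs.ext4 -b 4096 -E stride=64,stripe_width=448 /dev/md0
```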
Linux: limit on open file descriptors
On Linux, a limit can be imposed on the number of open file descriptors a user can have. You can see the current limits by running the ulimit command:
```
> ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 3889
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 3889
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
```
The limit is given by the open files setting, 1,024 in the example above. This is a common default on Linux.
Ensure that the user that will run Tomcat has a sufficiently high open files limit.
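One common way to raise the limit is to add per-user entries to /etc/security/limits.conf and then confirm the new value in a fresh login session. This is a sketch only: the user name tomcat and the value 65536 are assumptions, so use the account that actually runs Tomcat and a limit agreed for your installation:

```
# Run as root. The user name 'tomcat' and the value 65536 are assumptions;
# substitute the account that runs Tomcat and your agreed limit.
cat >> /etc/security/limits.conf <<'EOF'
tomcat  soft  nofile  65536
tomcat  hard  nofile  65536
EOF

# The new limit applies to new login sessions; confirm it as the tomcat user:
su - tomcat -c 'ulimit -n'
```

If Tomcat is started as a systemd service, note that limits.conf only applies to PAM login sessions; in that case the limit can instead be set with the LimitNOFILE= directive in the service unit file.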