
  Memory Contention in J2EE Applications for Multiprocessor Platforms
Subject:   JVM: Sun JVM 1.3
Date:   2004-11-15 10:25:04
From:   whirlycott
I would have guessed that 1.4 and 1.5 would have been more relevant, especially 1.5 since it uses a new memory model.

More importantly, do your charts really indicate anything at all? How can we know what effect the additional CPUs are having when there is no comparison given for single-, dual-, and quad-CPU boxes? This doesn't indicate anything more than "under load, things go slower."

I would also have been interested in seeing how different operating systems might handle memory contention issues.

Your suggestion to scale out doesn't address the possibility that you might need to create shared locks across the network when different nodes are trying to get exclusive access to something, like a networked cache.

There's also a typo in your chart. The last value in column 1 ought to be 7500.

What is "Service Demand" and how is it measured?


  • JVM: Sun JVM 1.3
    2004-11-22 03:20:01  DeepakGoel

    1. Service demand is the amount of resource spent on a single request. Here we are looking at CPU service demand, which can be arrived at with the following formula:

    CPU Service Demand = Utilization / Requests Served per Second

    In Chart 3 we see that service demand increases with increasing load. This is due to memory contention. Normally, for most applications, service demand should remain more or less constant as load increases.
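    As a quick illustration of the formula, here is a minimal Java sketch (the class and method names are my own, not from the article; the sample numbers are made up):

```java
public class ServiceDemand {

    // CPU service demand: CPU-seconds consumed per request, derived
    // from measured utilization (0.0-1.0) and throughput (requests/sec).
    static double cpuServiceDemand(double utilization, double requestsPerSec) {
        return utilization / requestsPerSec;
    }

    public static void main(String[] args) {
        // Hypothetical sample: 60% CPU utilization at 300 requests/sec
        // gives 0.002 CPU-seconds (2 ms of CPU) per request.
        System.out.println(cpuServiceDemand(0.60, 300.0));
    }
}
```

    If the application scales cleanly, this number stays roughly flat as load rises; a climbing value (as in Chart 3) signals that each request is costing more CPU, e.g. due to contention.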

    2. We have tried with 2/4/6/8 CPUs and the behavior is very much the same. The effect of increasing load is that all the processors in the system have threads running simultaneously, which brings about this contention.

    3. We have tried with JVM 1.4 and the behavior is very much the same.

    4. We have also run this on Windows and Solaris platforms. The memory contention remains.