Saturday, June 16, 2012

Verbose garbage collection

When you turn verbose garbage collection on, WAS starts printing garbage collection information to the native_stderr.log file. You can enable verbose garbage collection from the WAS Admin Console by going to Servers > Application servers > server_name > Process Definition > Java Virtual Machine and checking the Verbose garbage collection option. Once it is enabled, the WAS server writes messages into the native_stderr.log file every time it executes a garbage collection.

Here are a couple of entries from my native_stderr.log file.


<af type="tenured" id="49" timestamp="Jul 05 13:13:17 2009" intervalms="32.872">
<minimum requested_bytes="16776" />
<time exclusiveaccessms="0.044" />
<tenured freebytes="3624584" totalbytes="116873216" percent="3" >
<soa freebytes="3624584" totalbytes="116873216" percent="3" />
<loa freebytes="0" totalbytes="0" percent="0" />
</tenured>
<gc type="global" id="49" totalid="49" intervalms="34.359">
<refs_cleared soft="0" threshold="32" weak="0" phantom="0" />
<finalization objectsqueued="0" />
<timesms mark="107.399" sweep="1.583" compact="0.000" total="109.117" />
<tenured freebytes="43894704" totalbytes="116873216" percent="37" >
<soa freebytes="43894704" totalbytes="116873216" percent="37" />
<loa freebytes="0" totalbytes="0" percent="0" />
</tenured>
</gc>
<tenured freebytes="43877928" totalbytes="116873216" percent="37" >
<soa freebytes="43877928" totalbytes="116873216" percent="37" />
<loa freebytes="0" totalbytes="0" percent="0" />
</tenured>
<time totalms="110.648" />
</af>

<af type="tenured" id="50" timestamp="Jul 05 13:13:18 2009" intervalms="232.708">
<minimum requested_bytes="32" />
<time exclusiveaccessms="0.031" />
<tenured freebytes="0" totalbytes="116873216" percent="0" >
<soa freebytes="0" totalbytes="116873216" percent="0" />
<loa freebytes="0" totalbytes="0" percent="0" />
</tenured>
<gc type="global" id="50" totalid="50" intervalms="234.140">
<classloadersunloaded count="13" timetakenms="48.694" />
<expansion type="tenured" amount="19868672" newsize="136741888" timetaken="0.152" reason="excessive time being spent in gc" gctimepercent="49" />
<refs_cleared soft="0" threshold="32" weak="3" phantom="0" />
<finalization objectsqueued="0" />
<timesms mark="118.982" sweep="2.639" compact="0.000" total="170.899" />
<tenured freebytes="55392816" totalbytes="136741888" percent="40" >
<soa freebytes="55392816" totalbytes="136741888" percent="40" />
<loa freebytes="0" totalbytes="0" percent="0" />
</tenured>
</gc>
<tenured freebytes="55392176" totalbytes="136741888" percent="40" >
<soa freebytes="55392176" totalbytes="136741888" percent="40" />
<loa freebytes="0" totalbytes="0" percent="0" />
</tenured>
<time totalms="172.525" />
</af>


The way the IBM JDK works is that if it is not able to allocate memory, it executes a garbage collection to free up memory; this is called an allocation failure. The J9 VM used in WAS 6.1 generates one <af> element every time a garbage collection runs for this reason.

The af element has the following attributes:

  • type: the area in which the allocation failure occurred (tenured in this example)

  • id: how many times this type of gc has been executed

  • intervalms: the time in ms since the last gc of this type

  • timestamp: the time of the gc


The minimum element shows the number of bytes that were requested by the allocation which the JVM could not satisfy, and which therefore triggered the garbage collection cycle.

The af element has 3 main child elements: the first tenured element has data about the state of the tenured area before the gc; the gc element describes what happened during the gc, such as time spent in the mark, sweep, and compact phases; and the second tenured element shows the state of the tenured area after the gc.
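As an illustration, the before/after figures in entries like the ones above can be pulled out programmatically. This is only a sketch (the class name and regex are my own, and a real tool should use a proper XML parser); it assumes the J9 <af> format shown in the sample:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Minimal sketch: extracts the "percent" attribute from each <tenured>
// element in a verbose GC fragment. The first figure is free heap before
// the gc; later figures are free heap after it. Not a full XML parser.
public class GcLogSketch {
    private static final Pattern TENURED =
        Pattern.compile("<tenured[^>]*percent=\"(\\d+)\"");

    public static List<Integer> tenuredPercents(String afFragment) {
        List<Integer> percents = new ArrayList<>();
        Matcher m = TENURED.matcher(afFragment);
        while (m.find()) {
            percents.add(Integer.parseInt(m.group(1)));
        }
        return percents;
    }

    public static void main(String[] args) {
        String sample =
            "<tenured freebytes=\"3624584\" totalbytes=\"116873216\" percent=\"3\" >"
          + "<tenured freebytes=\"43894704\" totalbytes=\"116873216\" percent=\"37\" >";
        System.out.println(tenuredPercents(sample)); // prints [3, 37]
    }
}
```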


The IBM Support Assistant includes the IBM Pattern Modeling and Analysis Tool for Java Garbage Collector, which can be used to analyze these garbage collection logs.

WebSphere Log Files /Logging performance data

Plug-In Logs
The web server HTTP plug-in creates a log, named http_plugin.log by default, placed under PLUGIN_HOME/logs/.
The plug-in writes error messages into this log. The attribute that controls this is the
<Log> element in plugin-cfg.xml.
For example:
<Log LogLevel="Error" Name="/opt/IBM/WebSphere/Plugins/logs/http_plugin.log" />
To enable tracing, set LogLevel to "Trace":
<Log LogLevel="Trace" Name="/opt/IBM/WebSphere/Plugins/logs/http_plugin.log" />
JVM logs
$ find /opt/IBM/WebSphere/ -name SystemOut.log -print
/opt/IBM/WebSphere/AppServer/profiles/%Profile%/Node/logs/member1/SystemOut.log
/opt/IBM/WebSphere/AppServer/profiles/%Profile%/Node/logs/member2/SystemOut.log
/opt/IBM/WebSphere/AppServer/profiles/%Profile%/Node/logs/nodeagent/SystemOut.log
/opt/IBM/WebSphere/AppServer/profiles/%Profile%/Dmgr/logs/Dmgr/SystemOut.log
NodeAgent Process Log
/opt/IBM/WebSphere/AppServer/profiles/%Profile%/Node/logs/nodeagent/native_stdout.log
/opt/IBM/WebSphere/AppServer/profiles/%Profile%/Node/logs/nodeagent/native_stderr.log
IBM service logs – activity.log
/opt/IBM/WebSphere/AppServer/profiles/%Profile%/Node/logs/activity.log
/opt/IBM/WebSphere/AppServer/profiles/%Profile%/Dmgr/logs/activity.log
——————————————————————————–
Enabling automated heap dump generation (do not do this in production)
  1. Click Servers > Application servers in the administrative console navigation tree.
  2. Click server_name >Performance and Diagnostic Advisor Configuration.
  3. Click the Runtime tab.
  4. Select the Enable automatic heap dump collection check box.
  5. Click OK
Locating and analyzing heap dumps
Go to profile_root\myProfile. IBM heap dump files are usually named heapdump*.phd.
Download and use tools like HeapAnalyzer or DumpAnalyzer to analyze them.
——————————————————————————–
Logging performance data with TPV(Tivoli Performance Viewer)
    1. Click Monitoring and Tuning > Performance Viewer > Current Activity > server_name > Settings > Log in the console navigation tree. To see the Log link on the Tivoli Performance Viewer page, expand the Settings node of the TPV navigation tree on the left side of the page. After clicking Log, the TPV log settings are displayed on the right side of the page.
    2. Click on Start Logging when viewing summary reports or performance modules.
    3. When finished, click Stop Logging . Once started, logging stops when the logging duration expires, Stop Logging is clicked, or the file size and number limits are reached. To adjust the settings, see step 1.
    By default, the log files are stored in the profile_root/logs/tpv directory on the node on which the server is running. TPV automatically compresses the log file when it finishes writing to it to conserve space. There is only a single log file in each .zip file, and it has the same name as the .zip file.
  • View logs.
    1. Click Monitoring and Tuning > Performance Viewer > View Logs in the console navigation tree.
    2. Select a log file to view using either of the following options:
      Explicit Path to Log File
      Choose a log file from the machine on which the browser is currently running. Use this option if you have created a log file and transferred it to your system. Click Browse to open a file browser on the local machine and select the log file to upload.
      Server File
      Specify the path of a log file on the server. In a stand-alone application server environment, type in the path to the log file. The profile_root\logs\tpv directory is the default on a Windows system.
    3. Click View Log. The log is displayed with log control buttons at the top of the view.
    4. Adjust the log view as needed. Buttons available for log view adjustment are described below. By default, the data replays at the Refresh Rate specified in the user settings. You can choose one of the Fast Forward modes to play data at a rate faster than the refresh rate.
      Rewind Returns to the beginning of the log file.
      Stop Stops the log at its current location.
      Play Begins playing the log from its current location.
      Fast Forward Loads the next data point every three (3) seconds.
      Fast Forward 2 Loads ten data points every three (3) seconds.
    You can view multiple logs at a time. After a log has been loaded, return to the View Logs panel to see a list of available logs. At this point, you can load another log.
    TPV automatically compresses the log file when it finishes writing it. The log does not need to be decompressed before viewing, though TPV can also view logs that have been decompressed.

JVM Logs

The JVM Logs are the SystemOut.log and SystemErr.log files stored in each App Server, Node agent, and Deployment Manager profile. They contain messages sent to the Java System.out and System.err streams.
Location:
/opt/IBM/WebSphere/AppServer/profiles/AppSrv01/logs/server1/SystemOut.log
/opt/IBM/WebSphere/AppServer/profiles/AppSrv01/logs/nodeagent/SystemOut.log
Where:
  • AppSrv01 is the App Server profile name.
  • server1 is the actual server name.
  • nodeagent is the actual node agent name for that profile.
Example event:
[2/17/12 13:52:00:440 PST] 00000057 CoordinatorCo W   HMGR0152W: CPU Starvation detected. Current thread scheduling delay is 10 seconds.

Time stamp, ThreadId, Message shortname, EventType, MessageId: Message
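These fields can be split apart mechanically. The following is a minimal sketch (the class name and regex are my own); it assumes the single-line bracketed layout shown above, whereas real entries can span multiple lines:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Minimal sketch of splitting a WAS SystemOut.log line into its fields:
// timestamp, thread id, message shortname, event type, message id, message.
public class WasLogLine {
    private static final Pattern LINE = Pattern.compile(
        "\\[(.+?)\\]\\s+(\\S+)\\s+(\\S+)\\s+(\\S)\\s+(\\S+):\\s+(.*)");

    // Returns the message id (e.g. HMGR0152W), or null if the line
    // does not follow the layout above.
    public static String messageId(String line) {
        Matcher m = LINE.matcher(line);
        return m.matches() ? m.group(5) : null;
    }

    public static void main(String[] args) {
        String line = "[2/17/12 13:52:00:440 PST] 00000057 CoordinatorCo W "
            + "  HMGR0152W: CPU Starvation detected. Current thread scheduling delay is 10 seconds.";
        System.out.println(messageId(line)); // prints HMGR0152W
    }
}
```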

Native (Process) Logs

The native_stdout.log or native_stderr.log files are stored in each App Server, Node agent, and Deployment Manager profile. They contain messages sent to stdout and stderr from native code including the JVM.
File location: /opt/IBM/WebSphere/AppServer/profiles/AppSrv01/logs/server1/native_stdout.log
/opt/IBM/WebSphere/AppServer/profiles/AppSrv01/logs/nodeagent/native_stderr.log
where:
  • AppSrv01 is the App Server profile name.
  • server1 is the actual server name.
  • nodeagent is the actual node agent name for that profile.

Other server specific log files

All log files stored as part of a profile in individual server directories have a structure similar to the JVM logs. An example of this is startServer.log. These files contain information about specific activities such as starting and stopping servers, adding nodes, and so on.

wsadmin.traceout

These files contain data for each wsadmin session, and their content is refreshed each time a new wsadmin session is created. They are stored in the profile's logs directory (the structure is similar to the JVM logs). Here Dmgr01 is the Deployment Manager's profile name:
/opt/IBM/WebSphere/AppServer/profiles/Dmgr01/logs/wsadmin.traceout 

FFDC logs

First Failure Data Capture (FFDC) logs contain information generated from a processing failure. The files are stored in the ffdc directory under the profile's logs directory. Here Dmgr01 is the Deployment Manager's profile name:
/opt/IBM/WebSphere/AppServer/profiles/Dmgr01/logs/ffdc/dmgr_cd30cd3_12.02.09_23.17.18.1876809.txt

Optional files

If you are a sophisticated Splunk user you can customize the Splunk App for WAS and create views and dashboards to look at the data in the files listed below. You can search and index these files using Splunk and do some basic field extractions on them. There are no out-of-the-box views that display this data.
Filename and location:
  • javacore*.txt: /opt/IBM/WebSphere/AppServer/profiles/Dmgr01/javacore*.txt
  • activity.log: /opt/IBM/WebSphere/AppServer/profiles/AppSrv01/logs/activity.log
  • Server exception log: /opt/IBM/WebSphere/AppServer/profiles/Dmgr01/logs/ffdc/dmgr_exception.log
  • App logs: stored in the profile's log folders

Latency Vs Bandwidth

One of the most commonly misunderstood concepts in networking is speed and capacity. Most people believe that capacity and speed are the same thing. For example, it's common to hear "How fast is your connection?" Invariably, the answer will be "640K", "1.5M" or something similar. These answers are actually referring to the bandwidth or capacity of the service, not speed.

Speed and bandwidth are interdependent. The combination of latency and bandwidth gives users the perception of how quickly a webpage loads or a file is transferred. It doesn't help that broadband providers keep saying "get high speed access" when they probably should be saying "get high capacity access". Notice the term "Broadband" - it refers to how wide the pipe is, not how fast.

Latency:

Latency is delay.

For our purposes, it is the amount of time it takes a packet to travel from source to destination. Together, latency and bandwidth define the speed and capacity of a network.

Latency is normally expressed in milliseconds. One of the most common methods to measure latency is the utility ping. A small packet of data, typically 32 bytes, is sent to a host and the RTT (round-trip time, time it takes for the packet to leave the source host, travel to the destination host and return back to the source host) is measured.

The following are typical latencies, as reported by others, of popular circuit types to the first hop. Please remember, however, that latency on the Internet is also affected by the routing that an ISP may perform (i.e., if your data packet has to travel further, latencies increase).

Ethernet                  .3ms
Analog Modem              100-200ms
ISDN                      15-30ms
DSL/Cable                 10-20ms
Stationary Satellite      >500ms, mostly due to the high orbital altitude
DS1/T1                    2-5ms


Bandwidth:

Bandwidth is normally expressed in bits per second. It's the amount of data that can be transferred during a second.

Increasing bandwidth is easier than reducing latency. To add bandwidth, more pipes are added. For example, with early analog modems it was possible to increase bandwidth by bonding two or more modems. In fact, ISDN achieves 128K of bandwidth by bonding two 64K channels using a datalink protocol called Multilink PPP.

Bandwidth and latency are connected. If the bandwidth is saturated, congestion occurs and latency increases. However, if the bandwidth of a circuit is not at peak, adding more of it will not decrease latency. Bandwidth can always be increased, but latency cannot be decreased: latency is a function of the electrical characteristics of the circuit.
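A quick back-of-envelope calculation makes the relationship concrete: the minimum time to fetch a payload in one round trip is roughly the RTT plus the payload size divided by the bandwidth. The figures below are illustrative, not measurements:

```java
// Back-of-envelope sketch: minimum single-round-trip transfer time is
// approximately RTT + payload / bandwidth. Class and figures are my own.
public class TransferTime {
    // rttMs: round-trip latency in milliseconds; bandwidthBps: bits per second.
    public static double transferMs(int payloadBytes, double rttMs, double bandwidthBps) {
        return rttMs + (payloadBytes * 8.0 / bandwidthBps) * 1000.0;
    }

    public static void main(String[] args) {
        // A 10 KB page over a 1.5 Mbit/s link: DSL-like 15 ms RTT
        // versus satellite-like 500 ms RTT.
        System.out.printf("DSL-like:       %.1f ms%n", transferMs(10_000, 15, 1_500_000));
        System.out.printf("Satellite-like: %.1f ms%n", transferMs(10_000, 500, 1_500_000));
        // Same bandwidth in both cases, yet the satellite link takes far
        // longer: the transfer is dominated by latency, which extra
        // bandwidth cannot fix.
    }
}
```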

WebSphere Application Server Log Files and Their Locations (IBM WAS Log Files & Paths)

IBM WebSphere Application Server creates the following log files: trace.log, SystemOut.log, SystemErr.log, activity.log, startServer.log, stopServer.log, native_stdout.log, and native_stderr.log. Let us look at these log files in detail.
1. Diagnostic trace service logs: contain the output of the System.out and System.err streams of the JVM for the application server process, along with other details, and can be used for diagnostic purposes. The default file used to capture this output is trace.log. The location (given below) and the name of the file can be changed.
2. Java virtual machine (JVM) logs: contain the standard JVM output and error streams. The JVM logs are created by redirecting the System.out and System.err streams of the JVM to the independent log files SystemOut.log and SystemErr.log respectively. These files contain the output of the System.out and System.err streams for the application server process. The data is written by the user program using statements such as System.out.println() and System.err.print(), or by calling a JVM method such as Exception.printStackTrace(). The System.out JVM log also contains system messages (message events) written by the application server. The default file names (SystemOut.log and SystemErr.log) and location (given below) can be changed.
3. Process logs: WAS processes contain two output streams, stdout and stderr, which are accessible to native code running in the process. Native code, including the JVM, might write data to these process streams. In addition, the System.out and System.err streams of the JVM can be configured to write their data to these streams as well. The default files used for these logs are native_stdout.log and native_stderr.log.
4. IBM service logs: the default file for this log is activity.log. It maintains a history of the activities of the WebSphere Application Server. The IBM service log is kept in a binary format and can be viewed with the Log and Trace Analyzer.
5. startServer.log and stopServer.log: logs generated during the application server's start and stop processes are captured in these files.
The default location for storing all log files except activity.log of WebSphere Application Server is as follows.
For a stand-alone server:
Linux: /opt/IBM/WebSphere/AppServer/profiles/default/logs/server1
Windows: drive:\Program Files\IBM\WebSphere\AppServer\profiles\default\logs\server1, where default is the profile name and server1 is the server name.
For a managed node :
Linux : /opt/IBM/WebSphere/AppServer/profiles/AppSrv01_nodename/logs/nodename
Windows:
drive:\Program Files\IBM\WebSphere\AppServer\profiles\AppSrv01_nodename\logs\nodename


Location for storing activity.log in a stand-alone server: on Linux, /opt/IBM/WebSphere/AppServer/profiles/default/logs; on Windows, drive:\Program Files\IBM\WebSphere\AppServer\profiles\default\logs.
Note: The default size for all log files except activity.log is 1 MB; for activity.log the default size is 2 MB. The location, file size, and name can be changed using the IBM admin console.

Friday, June 15, 2012

Java – Threading & Synchronization Issues

Of the many issues affecting the performance of Java/.NET applications, synchronization ranks near the top. Issues arising from synchronization are often hard to recognize, and their impact on performance can become significant. What’s more, they are often, at least in principle, avoidable.
The fundamental need to synchronize lies with Java’s support for concurrency. This is implemented by allowing the execution of code by separate threads within the same process. Separate threads can share the same resources, objects in memory. While being a very efficient way to get more work done (while one thread waits for an IO operation to complete, another thread gets the CPU to run a computation), the application is also exposed to interference and consistency problems.
The JVM/CLR does not guarantee an execution order for code running in concurrent threads. If multiple threads reference the same object, there is no telling what state that object will be in at a given moment in time. The repercussions of that simple fact can be enormous, with, for example, one thread running calculations and returning wrong results because a concurrent thread is accessing and modifying shared bits of information at the same time.
To prevent such a scenario (a program needs to execute correctly, after all), a programmer uses the synchronized keyword in his/her program to force order on concurrent thread execution. Using synchronized prevents two threads from holding the same object's monitor at the same time.
In practice, however, this simple mechanism comes with substantial side effects. Modern business applications are typically highly multi-threaded. Many threads execute concurrently, and consequently “contend” heavily for shared objects. Contention occurs when a thread wants to access a synchronized object that is already held by another thread. All threads contending effectively “block,” halting their execution until they can acquire the object. Synchronization effectively forces concurrent processing back into sequential execution.
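A minimal sketch of this blocking behavior (the class is my own illustration, not taken from any particular application): while one thread holds the monitor, the other must wait, so the guarded section runs sequentially even with two threads available.

```java
// Minimal sketch: two threads contending for one lock. While one thread
// holds the monitor, the other blocks, so the synchronized sections run
// sequentially even though two threads are available.
public class ContentionSketch {
    private final Object lock = new Object();
    private int counter = 0;

    public void increment() {
        synchronized (lock) {   // only one thread at a time gets past this point
            counter++;
        }
    }

    public int value() {
        synchronized (lock) {
            return counter;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        ContentionSketch s = new ContentionSketch();
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                s.increment();
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Without the synchronized blocks the lost-update race would make this
        // total unpredictable; with them, the count is exact.
        System.out.println(s.value()); // prints 200000
    }
}
```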
With just a few metrics we can show the effects of synchronization on an application’s performance. For instance, take a look at the graph below.


While increasing load (number of users = blue), we see that at some point midway the response time (yellow) takes an upward curve, while at the same time resource usage (CPU = red) somewhat increases to eventually plateau and even recedes. It almost looks like the application runs with the “handbrake on,” a classic, albeit high-level, symptom of an application that has been “over-synchronized.”
With every new version of the JVM/CLR, improvements are made to mitigate this issue. However, while helpful, these improvements can’t fully resolve it or restore the application’s lost performance.
Also, developers have come to adopt “defensive” coding practices, synchronizing large pieces of code to prevent possible problems. In large development organizations this problem is further magnified, as no one developer or team has full ownership of an application’s entire code base. The practice of erring on the side of safety can quickly get out of hand, with large portions of synchronized code significantly limiting an application’s potential throughput.
It is often too arduous a task to maintain a locking strategy fine-grained enough to ensure that only the necessary minimum of execution paths is synchronized. New approaches to better manage state in a concurrent environment, such as ReadWriteLock, are available in newer versions of Java, but they are not widely adopted yet. These approaches promise a higher degree of concurrency, but it will always be up to the developer to implement and use the mechanism correctly.
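As a sketch of the read/write-lock approach mentioned above (the class is my own illustration): many threads may hold the read lock simultaneously, while the write lock is exclusive, which raises concurrency for read-mostly shared state compared to one synchronized block.

```java
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch of the ReadWriteLock approach: readers share the lock, writers
// get it exclusively. Illustrative only; not from any particular codebase.
public class ReadMostlyCache {
    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    private String value = "initial";

    public String read() {
        lock.readLock().lock();        // shared: readers do not block each other
        try {
            return value;
        } finally {
            lock.readLock().unlock();
        }
    }

    public void write(String newValue) {
        lock.writeLock().lock();       // exclusive: blocks readers and writers
        try {
            value = newValue;
        } finally {
            lock.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        ReadMostlyCache cache = new ReadMostlyCache();
        cache.write("updated");
        System.out.println(cache.read()); // prints updated
    }
}
```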
Is synchronization, then, always going to result in a high MTTR?
New technologies exist on the horizon that may lend some relief.  Software Transactional Memory Systems (STM), for example, might become a powerful weapon for dealing with synchronization issues. They may not be ready for prime time yet, but given what we’ve seen with database systems, they might be the key to taming the concurrency challenges affecting applications today. Check out JVSTM, Multiverse and Clojure for examples of STMs.
For now, the best development organizations are the ones that can walk the fine line of balancing code review/rewrite burdens against concessions to performance. APM tools can help quite a lot in such scenarios, allowing you to monitor application execution under high load (aka “in production”) and quickly pinpoint the execution times for particular highly contended objects, database connections being a prime example. With the right APM in place, the ability to identify thread synchronization issues is greatly increased, and the overall MTTR will drop dramatically.


Java IO: System.in, System.out, and System.err

The 3 streams System.in, System.out, and System.err are also common sources or destinations of data. Most commonly used is probably System.out for writing output to the console from console programs.
These 3 streams are initialized by the Java runtime when a JVM starts up, so you don't have to instantiate any streams yourself (although you can exchange them at runtime).

System.in

System.in is an InputStream which is typically connected to the keyboard input of console programs. System.in is not used as often, since data is commonly passed to a command line Java application via command line arguments or configuration files. In applications with a GUI, the input to the application is given via the GUI; this is a separate input mechanism from Java IO.
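For completeness, here is a minimal sketch of reading one line of keyboard input from System.in (the class and helper method names are my own):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;

// Minimal sketch: reading one line of text from System.in by wrapping
// the raw byte stream in a character reader.
public class ReadLineFromStdin {
    // Reads a single line of text from the given input stream.
    static String readLine(InputStream in) throws IOException {
        BufferedReader reader = new BufferedReader(new InputStreamReader(in));
        return reader.readLine();   // blocks until a newline arrives
    }

    public static void main(String[] args) throws IOException {
        System.out.print("Enter your name: ");
        String name = readLine(System.in);
        System.out.println("Hello, " + name);
    }
}
```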

System.out

System.out is a PrintStream. System.out normally outputs the data you write to it to the console. This is often used from console-only programs like command line tools. It is also often used to print debug statements from a program (though that is arguably not the best way to get debug info out of a program).

System.err

System.err is a PrintStream. System.err works like System.out except it is normally only used to output error texts. Some programs (like Eclipse) will show the output to System.err in red text, to make it more obvious that it is error text.

Simple System.out + System.err Example:

Here is a simple example that uses System.out and System.err:
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

try {
  InputStream input = new FileInputStream("c:\\data\\...");
  System.out.println("File opened...");

} catch (IOException e){
  System.err.println("File opening failed:");
  e.printStackTrace();
}

Exchanging System Streams

Even if the 3 System streams are static members of the java.lang.System class, and are pre-instantiated at JVM startup, you can change what streams to use for each of them. Just set a new InputStream for System.in or a new OutputStream for System.out or System.err, and all further data will be read / written to the new stream.
To set a new System stream, use one of the methods System.setIn(), System.setOut() or System.setErr(). Here is a simple example:
OutputStream output = new FileOutputStream("c:\\data\\system.out.txt");
PrintStream printOut = new PrintStream(output);

System.setOut(printOut);
Now all data written to System.out should be redirected into the file "c:\\data\\system.out.txt". Keep in mind, though, that you should make sure to flush System.out and close the file before the JVM shuts down, to be sure that all data written to System.out is actually flushed to the file.

Friday, June 8, 2012

15 Practical Grep Command Examples In Linux / UNIX

You should get a grip on the Linux grep command.

This is part of the on-going 15 Examples series, where 15 detailed examples will be provided for a specific command or functionality.  Earlier we discussed 15 practical examples for Linux find command,  Linux command line history and mysqladmin command.


In this article, let us review 15 practical examples of the Linux grep command that will be very useful to both newbies and experts.


First create the following demo_file that will be used in the examples below to demonstrate grep command.

$ cat demo_file
THIS LINE IS THE 1ST UPPER CASE LINE IN THIS FILE.
this line is the 1st lower case line in this file.
This Line Has All Its First Character Of The Word With Upper Case.

Two lines above this line is empty.
And this is the last line.

1. Search for the given string in a single file

The basic usage of grep command is to search for a specific string in the specified file as shown below.

Syntax:
grep "literal_string" filename

$ grep "this" demo_file
this line is the 1st lower case line in this file.
Two lines above this line is empty.

2. Checking for the given string in multiple files.

Syntax:
grep "string" FILE_PATTERN


This is also a basic usage of the grep command. For this example, let us copy demo_file to demo_file1. The grep output will also include the file name in front of each line that matched the specific pattern, as shown below. When the Linux shell sees the metacharacter, it performs the expansion and gives all the matching files as input to grep.

$ cp demo_file demo_file1

$ grep "this" demo_*
demo_file:this line is the 1st lower case line in this file.
demo_file:Two lines above this line is empty.
demo_file:And this is the last line.
demo_file1:this line is the 1st lower case line in this file.
demo_file1:Two lines above this line is empty.
demo_file1:And this is the last line.

3. Case insensitive search using grep -i

Syntax:
grep -i "string" FILE


This is also a basic usage of grep. It searches for the given string/pattern case-insensitively, so it matches all the words such as “the”, “THE” and “The”, as shown below.

$ grep -i "the" demo_file
THIS LINE IS THE 1ST UPPER CASE LINE IN THIS FILE.
this line is the 1st lower case line in this file.
This Line Has All Its First Character Of The Word With Upper Case.
And this is the last line.

4. Match regular expression in files

Syntax:
grep "REGEX" filename


This is a very powerful feature, if you can use regular expressions effectively. In the following example, it searches for any pattern that starts with “lines” and ends with “empty” with anything in between, i.e., it searches for “lines[anything in-between]empty” in demo_file.

$ grep "lines.*empty" demo_file
Two lines above this line is empty.

From documentation of grep: A regular expression may be followed by one of several repetition operators:

    ? The preceding item is optional and matched at most once.
    * The preceding item will be matched zero or more times.
    + The preceding item will be matched one or more times.
    {n} The preceding item is matched exactly n times.
    {n,} The preceding item is matched n or more times.
    {,m} The preceding item is matched at most m times.
    {n,m} The preceding item is matched at least n times, but not more than m times.

5. Checking for full words, not for sub-strings using grep -w

If you want to search for a word and avoid matching it as a substring of other words, use the -w option. A normal search will show all the lines containing the substring.

The following example is a regular grep searching for “is”. When you search for “is” without any option, it will match “is”, “his”, “this” and everything else that has the substring “is”.

$ grep -i "is" demo_file
THIS LINE IS THE 1ST UPPER CASE LINE IN THIS FILE.
this line is the 1st lower case line in this file.
This Line Has All Its First Character Of The Word With Upper Case.
Two lines above this line is empty.
And this is the last line.


The following example is the word grep, searching only for the word “is”. Please note that this output does not contain the line “This Line Has All Its First Character Of The Word With Upper Case”, even though “is” is present inside “This”, because -w looks only for the word “is” and not for “this”.

$ grep -iw "is" demo_file
THIS LINE IS THE 1ST UPPER CASE LINE IN THIS FILE.
this line is the 1st lower case line in this file.
Two lines above this line is empty.
And this is the last line.

6. Displaying lines before/after/around the match using grep -A, -B and -C

When doing a grep on a huge file, it may be useful to see some lines around the match. It is handy when grep can show you not only the matching lines but also the lines after, before, or around the match.


Please create the following demo_text file for this example.

$ cat demo_text
4. Vim Word Navigation

You may want to do several navigation in relation to the words, such as:

 * e - go to the end of the current word.
 * E - go to the end of the current WORD.
 * b - go to the previous (before) word.
 * B - go to the previous (before) WORD.
 * w - go to the next word.
 * W - go to the next WORD.

WORD - WORD consists of a sequence of non-blank characters, separated with white space.
word - word consists of a sequence of letters, digits and underscores.

Example to show the difference between WORD and word

 * 192.168.1.1 - single WORD
 * 192.168.1.1 - seven words.

6.1 Display N lines after match

-A is the option which prints the specified N lines after the match as shown below.

Syntax:
grep -A <N> "string" FILENAME


The following example prints the matched line, along with the 3 lines after it.

$ grep -A 3 -i "example" demo_text
Example to show the difference between WORD and word

* 192.168.1.1 - single WORD
* 192.168.1.1 - seven words.

6.2 Display N lines before match

-B is the option which prints the specified N lines before the match.

Syntax:
grep -B <N> "string" FILENAME


Just as -A shows the N lines after the match, -B shows the N lines before it.

$ grep -B 2 "single WORD" demo_text
Example to show the difference between WORD and word

* 192.168.1.1 - single WORD

6.3 Display N lines around match

-C is the option which prints the specified N lines around the match. On some occasions you might want the match to appear with the lines from both sides. This option shows N lines on both sides (before and after) of the match.

$ grep -C 2 "Example" demo_text
word - word consists of a sequence of letters, digits and underscores.

Example to show the difference between WORD and word

* 192.168.1.1 - single WORD

7. Highlighting the search using GREP_OPTIONS

grep prints the lines of the file that match the pattern/string you give it. If you want it to highlight which part of the line matched, use the following approach.

When you do the following export, matched text will be highlighted. In the following example, every occurrence of “this” will be highlighted once you set the GREP_OPTIONS environment variable as shown below.

$ export GREP_OPTIONS='--color=auto' GREP_COLOR='100;8'

$ grep this demo_file
this line is the 1st lower case line in this file.
Two lines above this line is empty.
And this is the last line.

8. Searching in all files recursively using grep -r

When you want to search all the files under the current directory and its subdirectories, the -r option is the one you need. The following example looks for the string “ramesh” in every file in the current directory and all of its subdirectories.

$ grep -r "ramesh" *
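A self-contained rehearsal of the same search (the directory and file names here are invented for the sketch):

```shell
# Create a small tree to search.
mkdir -p demo_dir/subdir
echo "ramesh was here" > demo_dir/file1.txt
echo "nothing to see"  > demo_dir/subdir/file2.txt

# -r descends into every subdirectory; each matching line is
# prefixed with the name of the file it came from.
grep -r "ramesh" demo_dir
# → demo_dir/file1.txt:ramesh was here
```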

9. Invert match using grep -v

So far there have been options to show matched lines, lines before and after a match, and to highlight matches. Naturally there is also an option, -v, to invert the match.

When you want to display the lines which do not match the given string/pattern, use the -v option as shown below. This example displays all the lines that do not contain the word “go”.

$ grep -v "go" demo_text
4. Vim Word Navigation

You may want to do several navigation in relation to the words, such as:

WORD - WORD consists of a sequence of non-blank characters, separated with white space.
word - word consists of a sequence of letters, digits and underscores.

Example to show the difference between WORD and word

* 192.168.1.1 - single WORD
* 192.168.1.1 - seven words.

10. Display the lines which do not match any of the given patterns

Syntax:
grep -v -e "pattern" -e "pattern"

$ cat test-file.txt
a
b
c
d

$ grep -v -e "a" -e "b" -e "c" test-file.txt
d

11. Counting the number of matches using grep -c

When you want to count how many lines match the given pattern/string, use the -c option.

Syntax:
grep -c "pattern" filename

$ grep -c "go" demo_text
6


To find out how many lines match the pattern:

$ grep -c this demo_file
3


To find out how many lines do not match the pattern:

$ grep -v -c this demo_file
4

12. Display only the file names which matches the given pattern using grep -l

If you want grep to print only the names of the files that match the given pattern, use the -l (lowercase L) option.

When you give grep multiple input files, it displays the names of the files that contain text matching the pattern. This is very handy when you are trying to track down a note somewhere in a whole directory structure.

$ grep -l this demo_*
demo_file
demo_file1

13. Show only the matched string

By default grep shows the whole line that matches the given pattern/string. To make grep print only the matched part of the line, use the -o option.

This is not especially useful with a literal string, but it becomes very useful with a regex pattern, when you want to see exactly what it matched:

$ grep -o "is.*line" demo_file
is line is the 1st lower case line
is line
is is the last line

14. Show the position of match in the line

When you want grep to show the position at which the pattern matched in the file, combine the following options:

Syntax:
grep -o -b "pattern" file

$ cat temp-file.txt
12345
12345

$ grep -o -b "3" temp-file.txt
2:3
8:3


Note: The offset shown above is not the position within the line; it is the byte offset from the start of the whole file.

15. Show line number while displaying the output using grep -n

To show the line number along with each matched line, use the -n option. Line numbering is 1-based and restarts for each file.

$ grep -n "go" demo_text
5: * e - go to the end of the current word.
6: * E - go to the end of the current WORD.
7: * b - go to the previous (before) word.
8: * B - go to the previous (before) WORD.
9: * w - go to the next word.
10: * W - go to the next WORD.
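Line numbers (-n) and byte offsets (-o -b) are easy to confuse, so here is a minimal sketch that shows both on the same input (pos_demo.txt is a made-up file for this example):

```shell
# Three 4-byte lines ("abc\n", "def\n", "abc\n").
printf 'abc\ndef\nabc\n' > pos_demo.txt

grep -n "abc" pos_demo.txt     # line numbers:  1:abc and 3:abc
grep -o -b "abc" pos_demo.txt  # byte offsets:  0:abc and 8:abc
```

The second match sits on line 3 but at byte offset 8, because the two lines before it occupy bytes 0 through 7.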

Host in Resource Failure in Performance Center

Problem description: The host is in Resource Failure status.
Troubleshooting
Option 1: Check the host connections
Check the connections between your project’s hosts and the machines
within your system using the Check Hosts operation in the Hosts page of
the Administration Site (Resources > Hosts > Check Hosts) or User Site
(Project > Hosts > Check Hosts).

 
If Ping to Host fails:
➤ Make sure the host is up and running, and is connected to the network.
Check the routing table (netstat –r) and make sure that requests to this
host are properly routed.
➤ Make sure that the host’s IP address can be properly resolved.
➤ Verify that the ping to the target host from the Performance Center Web
server, utility server, and database is below 20 ms.
➤ If your firewalls or hosts ignore ICMP requests (pings), use HTTP requests
to validate response times from the host to the Web server (a simple
LoadRunner web_url(…) request to http://<server>/loadtest/).
Alternatively, open a browser and type http://<server>/loadtest in the
address field.

If the file server fails:
➤ Make sure that the host can ping the file server.
➤ Make sure that the Performance Center system user has access to LRFS
share on the file server. You can verify this in any of the following ways:
➤ In the command line, execute the following:

net use \\fileserver\LRFS /user:<Performance_Center_user> <Performance_Center_user_password>

➤ Log in to the host machine as the Performance Center system user, and
in the command line, execute the following:

net use \\fileserver\LRFS

If the above operation fails, check the error message and resolve the
problem. Contact your Windows administrator for assistance.
➤ Make sure that the security settings on LRFS share allow the HP
Performance Center system user full control.
➤ Make sure that the Performance Center system user can create, update,
and delete files from LRFS share.
➤ Make sure that the Performance Center Web server can launch
applications on the host
If the database fails:
➤ Check the <HP Performance Center>/bin/globals.ini file and make sure
that the connection string is correct for the database you are using.
➤ See the troubleshooting for “Login to Oracle Database Server Hangs” on
page 124.
➤ Verify ADODB connectivity to the database 
Option 2: Check the Performance Center version
Make sure that the version of Performance Center service pack level and
feature pack level on all your host and server machines are the same as those
on the Performance Center Web and Utility servers. That is, all Performance
Center hosts and servers MUST be at the same service pack and feature pack
level.
Check the registry entry under
[HKEY_LOCAL_MACHINE\SOFTWARE\Mercury
Interactive\LoadRunner\CurrentVersion] for the following variables:
➤ Major
➤ Minor
➤ ServicePack
Note: Any patches applied to one machine must be applied to all machines
if and when applicable.

Option 3: Launch wlrun.exe manually from the Controller host machine
If the host check succeeds, but the host is still not operational, launch the
Controller manually from the Controller host machine as follows:
1 Log in to the host machine.
2 Configure the wlrun.LrEngine application to run as an interactive user:
➤ Launch dcomcnfg.exe.
➤ In the Application tab, select wlrun.LrEngine from the list of DCOM
applications.
➤ Click Properties to view the properties for wlrun.LrEngine.
➤ In the Identity tab, set the user account to The interactive user.
➤ Click OK and close the DCOMCNFG window.
Note: When you are finished with this step, set wlrun.LrEngine back to
its original identity. By default, this is This User with the Performance
Center user name and password. If you used a different identity, restore
it.
3 Launch the Controller (wlrun.exe) from the <HP Performance Center>/bin
directory (on the Controller host).
4 If an error message is displayed during the startup of the Controller, resolve
the error message before continuing. For Performance Center to utilize the
Controller properly, no error messages should be displayed during startup.
5 Create a new, simple load test and reference the scripts from the LRFS share
(on the Performance Center file server). Run the load test with one or two
users to verify that the Controller works.
Note: Scripts uploaded to the Performance Center LRFS reside in the
\\fileserver\LRFS\<ProjectID>\Scripts directory. To obtain the <ProjectID>,
select User Management > Projects.
The following is an example path to the USR file for a script named MyTest:
\\myserver\LRFS\2\Scripts\MyTest\MyTest.usr
6 Close the Controller (wlrun.exe).
7 From the Performance Center User Site, launch a simple load test.
➤ Check whether the following processes are displayed in the Task Manager
on the Controller host:
➤ OrchidActiveSession.exe OR ORCHID~1.exe
➤ WLRUN.EXE
➤ Check whether the Controller is displaying a dialog box that requires
user input before the Controller can proceed with the load test. Address
the reasons for the dialog box being displayed, and make sure that no
dialog boxes are displayed when re-running the load test from
Performance Center.
Examples of dialog boxes that may be displayed include License Has
Expired, Monitor Not Licensed, and Host is Over-Utilized.
If you are unsure how to resolve the problem indicated in the dialog box,
contact the Customer Support Web site (http://www.hp.com/go/hpsoftwaresupport) for assistance.
Option 4: Reinstall Performance Center host on the host machine
If all of the above steps fail to resolve the problem, reinstall the Performance
Center host on the host machine.
1 Uninstall Performance Center (Start > Settings > Control Panel > Add/
Remove Programs). (This is really the Performance Center host.)
2 Delete HKEY_LOCAL_MACHINE\SOFTWARE\Mercury Interactive from the
registry.
3 Clean the Performance Center machine, as described in the section about
cleaning Performance Center machines in the HP Performance Center System
Configuration and Installation Guide.
4 Re-install the Performance Center host. For more information, see the HP
Performance Center System Configuration and Installation Guide. Make sure that
you install the same version of Performance Center as is installed on your
Web server.
Note: Do not install a Performance Center server (such as a Utility Server,
Web Server, or File Server) on the same machine as the Performance Center
host (such as a data processor, Controller, or load generator).

Unix Commands Part-1

Listed here are a few system monitoring commands which should give you a rough idea of how the server is running.
# server information
uname -a

# server config information
prtconf
sysdef -i

# server up time
uptime

# disk free, listed in KB
df -kt

# mounted devices
mount

# network status
netstat -rn

# network configuration info
ifconfig -a

# processes currently running
ps -elf

# user processes
w
whodo
who am i
finger
ps

# virtual memory statistics
vmstat 5 5

# system activity reporter (Solaris/AIX)
sar 5 5

# report per processor statistics (Solaris)
mpstat 5 5
psrinfo

# swap disk status (Solaris)
swap -l

# shared memory
ipcs -b


Solaris note: SAR numbers can be misleading: memory freed by processes is not always counted as 'available' by the reporting tool. Solaris support has recommended using the SR (swap rate) column of vmstat to monitor memory availability. When this number reaches 150+, a kernel panic may ensue.

System startup

The kernel is loaded by the boot command, which is executed during startup in a machine-specific way. The kernel may exist on a local disk, CD-ROM, or network. After the kernel loads, the necessary file systems are mounted (located in /etc/vfstab), and /sbin/init is run, which brings the system up to the "initdefault" state set in /etc/inittab. Subsystems are started by scripts in the /etc/rc1.d,/etc/rc2.d, and /etc/rc3.d directories.

System shutdown


# shutdown the server in 60 seconds, restart system in administrative state
# (Solaris)
/usr/sbin/shutdown -y -g60 -i1 "System is being restarted"

# shutdown the server immediately, cold state
# (Solaris)
/usr/sbin/shutdown -y -g0 -i0

# shutdown AIX server, reboot .. also Ctrl-Ctrl/Alt
shutdown -Fr



# restart the server
/usr/sbin/reboot

User accounts

Adding a unix account involves creating the login and home directory, assigning a group, adding a description, and setting the password. The .profile script should then be placed in the home directory.
# add jsmith account .. the -m parm forces the home dir creation
useradd -c "Jim Smith" -d /home/jsmith -m -s "/usr/bin/ksh" jsmith

# change primary group for user jsmith
usermod -g staff jsmith

# change jsmith password
passwd jsmith

# change jsmith description
usermod -c "J.Smith" jsmith

# remove ksmith account
userdel ksmith

# display user accounts
cat /etc/passwd
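Rather than dumping the whole file, single fields can be pulled out of /etc/passwd, which is colon-delimited (login, password field, UID, GID, comment, home directory, shell). A small sketch:

```shell
# Field 1 is the login name.
cut -d: -f1 /etc/passwd | head -5          # first five login names

# awk can filter on fields: accounts with UID 0 (normally just root).
awk -F: '$3 == 0 { print $1 }' /etc/passwd
```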

/* here is a sample .profile script, for sh or ksh */
stty istrip
stty erase ^H
PATH=/usr/bin:/usr/ucb:/etc:/usr/lib/scripts:/usr/sbin:.
export PATH
PS1='BOXNAME:$PWD>'
export PS1

Displaying files


# display file contents
cat myfile

# determine file type
file myfile

# display file, a screen at a time (Solaris)
pg myfile

# display first 100 lines of a file
head -100 myfile

# display last 50 lines of a file
tail -50 myfile

# display file that is changing, dynamically
tail -f errlog.out

File permissions

Permission flags: r = read, w = write, x = execute Permissions are displayed for owner, group, and others.
# display files, with permissions
ls -l
# make file readable, writeable, and executable for group/others
chmod 777 myfile

# make file readable and executable for group/others
chmod 755 myfile

# make file inaccessible for all but the owner
chmod 700 myfile

# make file readable and executable for group/others,
# with the setuid bit: the process runs with the owner's user ID
chmod 4755 myfile

# change permission flags (here: 755) recursively for a directory and its contents
chmod -R 755 mydir

# change group to staff for this file
chgrp staff myfile

# change owner to jsmith for this file
chown jsmith myfile
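One way to check that a chmod took effect is GNU stat's octal output (perm_demo.txt is a throwaway file invented for this sketch):

```shell
touch perm_demo.txt

chmod 755 perm_demo.txt
stat -c '%a' perm_demo.txt    # → 755

chmod 700 perm_demo.txt
stat -c '%a' perm_demo.txt    # → 700
```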

Listing files

See scripting examples for more elaborate file listings.
# list all files, with directory indicator, long format
ls -lpa

# list all files, sorted by date, ascending
ls -lpatr

# list all text files
ls *.txt

Moving/copying files

See scripting examples for moving and renaming collections of files.
# rename file to backup copy
mv myfile myfile.bak

# copy file to backup copy
cp myfile myfile.bak

# move file to tmp directory
mv myfile /tmp

# copy file from tmp dir to current directory
cp /tmp/myfile .

Deleting files

See scripting examples for group dissection routines.
# delete the file
rm myfile

# delete an empty directory
rmdir mydir

# delete directory, and all files in it
rm -r mydir

Disk usage


# display disk free, in KB
df -kt

# display disk usage, in KB for directory
du -k mydir

# display directory disk usage, sort by largest first
du -ak / | sort -nr | pg

Using tar


# display contents of a file
tar tvf myfile.tar

# display contents of a diskette (Solaris)
volcheck
tar tvf /vol/dev/rdiskette0/unnamed_floppy

# copy files to a tar file
tar cvf myfile.tar *.sql

# format floppy, and copy files to it (Solaris)
fdformat -U -b floppy99
tar cvf /vol/dev/rdiskette0/floppy99 *.sql

# append files to a tar file
tar rvf myfile.tar *.txt

# extract files from a tar file, to the current dir
tar xvf myfile.tar
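The create / list / extract cycle can be rehearsed end to end (the directory and file names here are invented for the sketch):

```shell
# Work in a scratch directory with a couple of files.
mkdir -p tar_demo && cd tar_demo
echo "select 1;" > a.sql
echo "select 2;" > b.sql

tar cvf myfile.tar *.sql      # create the archive
tar tvf myfile.tar            # list its contents

mkdir extracted && cd extracted
tar xvf ../myfile.tar         # extract into the current dir
ls                            # a.sql  b.sql
cd ../..
```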

Starting a process

This section briefly describes how to start a process from the command line.
Glossary:

   & - run in background
   nohup (No Hang Up) - lets process continue, even if session is disconnected


# run a script, in the background
runbackup &

# run a script, allow it to continue after logging off
nohup runbackup &


# Here stdout shows up in test70.log.  Whether errors land in
# test70.log or in nohup.out depends on the nohup implementation.

nohup /export/spare/hmc/scripts/test70 > test70.log &



# Here nohup.out will not be created; any output will
# show up in test70.log.  Errors will appear test70.log also !

nohup /export/spare/hmc/scripts/test70 > test70.log 2>&1  &
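The difference between the two nohup forms comes down to where file descriptor 2 (stderr) points, and the order of redirections matters. A sketch of the same mechanics without nohup (log file names invented):

```shell
# '2>&1' AFTER the file redirect sends stderr wherever stdout
# already points, so both streams end up in both.log.
( echo "normal output"; echo "error output" >&2 ) > both.log 2>&1
cat both.log

# Placed BEFORE the file redirect, stderr is duplicated to the
# terminal first; only stdout reaches the file.
( echo "normal output"; echo "error output" >&2 ) 2>&1 > only_stdout.log
cat only_stdout.log
```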





Killing a process


1) In your own session;  e.g. jobs were submitted, but you never logged out:

ps                           # list jobs
kill -9 <process id>         # kill it



2) In a separate session

# process ID appears as column 4
ps -elf | grep -i <process name>

kill -9 <process id>         # kill it



3)  For device (or file)

# find out who is logged in from where

w

# select device, and add /dev ... then use the fuser command

fuser -k /dev/pts/3
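The pattern in case 1 can be rehearsed safely with a throwaway background job; $! holds the PID of the last background command:

```shell
sleep 300 &              # long-running dummy process
pid=$!                   # remember its PID
kill -9 "$pid"           # kill it
wait "$pid" 2>/dev/null  # reap it (ignore the killed status)

# kill -0 sends no signal; it only tests whether the PID still exists.
kill -0 "$pid" 2>/dev/null || echo "process $pid is gone"
```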



Redirecting output

Output can be directed to another program or to a file.
# send output to a file
runbackup > /tmp/backup.log

# also redirect error output
runbackup > /tmp/backup.log 2> /tmp/errors.log

# send output to grep program
runbackup | grep "serious"

Date stamping, and other errata

This section collects date stamping and a few other miscellaneous commands.
# Date stamping files
# format is :
# touch -t yyyymmddhhmi.ss filename

touch -t 199810311530.00 hallowfile

# force a variable's value to uppercase (ksh); use -l for lowercase
typeset -u newfile=$filename

# date formatting, yields 112098 for example
date '+%m%d%y'
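The same format string can be checked against a fixed date using GNU date's -d option, which reproduces the 112098 example:

```shell
# today's date in mmddyy form
date '+%m%d%y'

# the same format applied to a fixed date (GNU date only)
date -d '1998-11-20' '+%m%d%y'   # → 112098
```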

# display a calendar (Solaris / AIX)
cal

# route output to both test.txt and std output
./runbackup | tee test.txt

# sleep for 5 seconds
sleep 5

# send a message to all users
wall "lunch is ready"

# edit file, which displays at login time (message of the day)
vi /etc/motd

# edit file, which displays before login time (Solaris)
vi /etc/issue

WebSphere Server Monitored Parameters

WebSphere servers are monitored based on the parameters and attributes listed below, grouped by category.

Parameter - Description

Monitor Details
WebSphere Version - Denotes the version of the WebSphere server monitor.
State - Refers to the different states of the WebSphere server, such as running and down.
HTTP Port - Refers to the HTTP Transport port.
Transaction Details - Specifies Global Commit Duration, Committed Transactions, Transactions Rolled Back and Transactions Optimized.
Server Response Time - Specifies Minimum, Maximum, Average and Current Response Time.
Availability - Specifies the status of the WebSphere server - available or not available.
JVM Memory Usage - Specifies the total memory in the JVM runtime.
CPU Utilization - Specifies the average system CPU utilization taken over the time interval since the last reading.
Free Memory - Specifies the amount of real free memory available on the system.
Average CPU Utilization - Specifies the average percent of CPU usage that is busy after the server is started.
Session Details of Web Applications
User Sessions - Specifies the total number of sessions that were created.
Invalidated Sessions - Specifies the total number of sessions that were invalidated.
Affinity Breaks - The total number of requests received for sessions that were last accessed from other Web applications. This value can indicate failover processing or a corrupt plug-in configuration.
EJB Details
Name - Mentions the names of the different EJBs present in the WebSphere server, with JAR and EAR name. Move the mouse pointer over the EJB name to view the JAR and EAR name.
Type - Denotes the type of the bean, such as entity, stateless session, stateful session, and message driven.
Concurrent Lives - Specifies the number of concurrent live beans.
Total Method Calls - Specifies the total number of method calls.
Average Method Response Time - Specifies the average time required to respond to the method calls.
Pool Size - Specifies the number of objects in the pool (entity and stateless).
Activation Time - Specifies the average time in milliseconds that a bean spends being activated for that particular bean container, including time at the database, if any.
Passivation Time - Specifies the average time in milliseconds that a bean spends being passivated for that particular bean container, including time at the database, if any.
Current JDBC Connection Pool Details
Name - Mentions the name of the current JDBC connection pool.
Pool Type - Refers to the type of the connection pool.
Create Count - Refers to the total number of connections created.
Pool Size - Specifies the size of the connection pool.
Concurrent Waiters - Specifies the number of threads that are currently waiting for a connection.
Faults - Specifies the total number of faults in the connection pool, such as timeouts.
Average Wait Time - Specifies the average waiting time, in milliseconds, until a connection is granted.
Percent Maxed - Specifies the average percent of the time that all connections are in use.
Thread Pool Details
Name - Mentions the name of the thread pool.
Thread Creates - Specifies the total number of threads created.
Thread Destroys - Specifies the total number of threads destroyed.
Active Threads - Specifies the number of concurrently active threads.
Pool Size - Specifies the average number of threads in the pool.
Percent Maxed - Specifies the average percent of the time that all threads are in use.

Tuesday, June 5, 2012

How Garbage Collection works in Java

This article is a continuation of my previous articles How Classpath works in Java and How to write Equals method in Java. Before moving ahead, let's recall a few important points about garbage collection in Java:

1) Objects are created on the heap in Java irrespective of their scope, e.g. local or member variables. It is worth noting that class variables or static members are created in the method area of the Java memory space, and both the heap and the method area are shared between different threads.
2) Garbage collection is a mechanism provided by the Java Virtual Machine to reclaim heap space from objects which are eligible for garbage collection.
3) Garbage collection relieves the Java programmer of memory management, which is an essential part of C++ programming, and gives more time to focus on business logic.
4) Garbage collection in Java is carried out by a daemon thread called the Garbage Collector.
5) Before removing an object from memory, the garbage collection thread invokes the finalize() method of that object, giving it an opportunity to perform any cleanup required.
6) As a Java programmer you cannot force garbage collection in Java; it will only trigger if the JVM thinks it needs a garbage collection, based on the Java heap size.
7) There are methods like System.gc() and Runtime.gc() which are used to send a garbage collection request to the JVM, but it is not guaranteed that garbage collection will happen.
8) If there is no heap space left for creating a new object, the Java Virtual Machine throws OutOfMemoryError (java.lang.OutOfMemoryError: Java heap space).
9) J2SE 5 (Java 2 Standard Edition) adds a new feature called Ergonomics; the goal of ergonomics is to provide good performance from the JVM with a minimum of command line tuning.


When an Object becomes Eligible for Garbage Collection
An object becomes eligible for garbage collection (GC) if it is not reachable from any live thread or any static references; in other words, an object becomes eligible for garbage collection when all of its references are null. Cyclic dependencies do not count as references, so if object A has a reference to object B and object B has a reference to object A, and they have no other live references, then both objects A and B are eligible for garbage collection.
Generally, an object becomes eligible for garbage collection in Java in the following cases:
1) All references to that object are explicitly set to null, e.g. object = null.
2) The object is created inside a block and its reference goes out of scope once control exits that block.
3) The parent object is set to null: if an object holds a reference to another object, and you set the container object's reference to null, the child or contained object automatically becomes eligible for garbage collection.
4) If an object only has live references via a WeakHashMap, it will be eligible for garbage collection. To learn more about HashMap, see How HashMap works in Java.

Heap Generations for Garbage Collection in Java
Java objects are created in the heap, and the heap is divided into three parts or generations for the sake of garbage collection in Java: the young generation, the tenured or old generation, and the Perm area of the heap.
The young generation is further divided into three parts known as Eden space, Survivor 1 and Survivor 2 space. When an object is first created in the heap, it gets created in the young generation, inside Eden space; if it survives subsequent minor garbage collections, it gets moved to Survivor 1 and then Survivor 2, before a major garbage collection moves it to the old or tenured generation.

The permanent generation of the heap, or Perm area, is somewhat special: it is used to store metadata related to classes and methods in the JVM, and it also hosts the String pool provided by the JVM, as discussed in my String tutorial on why String is immutable in Java. There are many opinions about whether garbage collection in Java happens in the Perm area of the Java heap or not; as far as I know this is JVM dependent, and it happens at least in Sun's implementation of the JVM. You can also try this yourself by creating millions of Strings and watching for garbage collection or an OutOfMemoryError.

Types of Garbage Collector in Java
The Java runtime (J2SE 5) provides various types of garbage collectors which you can choose based on your application's performance requirements. Java 5 adds three garbage collectors in addition to the serial garbage collector. Each is a generational garbage collector, implemented to increase the throughput of the application or to reduce garbage collection pause times.

1) Throughput Garbage Collector: This garbage collector uses a parallel version of the young generation collector. It is used if the -XX:+UseParallelGC option is passed to the JVM on the command line. The tenured generation collector is the same as the serial collector.

2) Concurrent low pause Collector: This collector is used if -Xincgc or -XX:+UseConcMarkSweepGC is passed on the command line. It is also referred to as the Concurrent Mark Sweep (CMS) garbage collector. The concurrent collector is used to collect the tenured generation and does most of the collection concurrently with the execution of the application; the application is paused only for short periods during the collection. A parallel version of the young generation copying collector is used with the concurrent collector. The Concurrent Mark Sweep garbage collector is the most widely used garbage collector in Java, and it uses an algorithm that first marks the objects that need to be collected when garbage collection triggers.

3) The incremental (sometimes called train) low pause collector: This collector is used only if -XX:+UseTrainGC is passed on the command line. It has not changed since Java 1.4.2 and is currently not under active development. It will not be supported in future releases, so avoid using it; please see the 1.4.2 GC Tuning document for information on this collector.
An important point to note is that -XX:+UseParallelGC should not be used with -XX:+UseConcMarkSweepGC. Argument parsing in the J2SE platform, starting with version 1.4.2, should only allow legal combinations of command line options for garbage collectors, but earlier releases may not detect all illegal combinations, and the results of an illegal combination are unpredictable. As noted, it is not recommended to use this train garbage collector.

JVM Parameters for garbage collection in Java
Garbage collection tuning is a long exercise and requires a lot of application profiling and patience to get right. While working on high volume, low latency electronic trading systems, I have worked on projects where we needed to increase the performance of a Java application by profiling and finding what caused full GCs, and I found that garbage collection tuning largely depends on the application profile: what kind of objects the application has, what their average lifetimes are, and so on. For example, if an application has too many short-lived objects, then making Eden space large enough will reduce the number of minor collections. You can also control the size of both the young and tenured generations using JVM parameters; for example, setting -XX:NewRatio=3 means that the ratio between the young and tenured generation is 1:3. You have to be careful sizing these generations: making the young generation larger will reduce the size of the tenured generation, which will force major collections to occur more frequently; these pause the application threads for their duration, resulting in degraded or reduced throughput. The parameters NewSize and MaxNewSize bound the young generation size from below and above; setting them equal to one another fixes the young generation size. In my opinion, before doing garbage collection tuning, a detailed understanding of garbage collection in Java is a must, and I would recommend reading the garbage collection documents provided by Sun Microsystems. Also, to get a full list of JVM parameters for a particular Java virtual machine, please refer to the official documents on garbage collection in Java. I found this link quite helpful: http://www.oracle.com/technetwork/java/gc-tuning-5-138395.html
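As an illustration of the flags discussed in this section, here is a hypothetical launch line; the application name (app.jar) and the heap sizes are invented for the sketch, not a recommendation:

```shell
# Illustrative example only: fixed 1 GB heap, young:tenured ratio 1:3,
# CMS collector, and verbose GC output so the collections can be watched.
java -Xms1g -Xmx1g \
     -XX:NewRatio=3 \
     -XX:+UseConcMarkSweepGC \
     -verbose:gc \
     -jar app.jar
```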

Full GC and Concurrent Garbage Collection in Java
The concurrent garbage collector in Java uses a single garbage collector thread that runs concurrently with the application threads, with the goal of completing the collection of the tenured generation before it becomes full. In normal operation, the concurrent garbage collector is able to do most of its work with the application threads still running, so only brief pauses are seen by the application threads. As a fallback, if the concurrent garbage collector is unable to finish before the tenured generation fills up, the application is paused and the collection is completed with all the application threads stopped. Such collections with the application stopped are referred to as full garbage collections, or full GCs, and are a sign that some adjustments need to be made to the concurrent collection parameters. Always try to avoid or minimize full garbage collections, because they affect the performance of a Java application. When you work in the finance domain on an electronic trading platform, with high volume, low latency systems, the performance of the Java application becomes extremely critical, and you definitely want to avoid full GCs during the trading period.

Summary on Garbage collection in Java
1) The Java heap is divided into three generations for the sake of garbage collection: the young generation, the tenured or old generation, and the Perm area.
2) New objects are created in the young generation and subsequently moved to the old generation.
3) The String pool is created in the Perm area of the heap; garbage collection can occur in Perm space, but this depends on the JVM.
4) Minor garbage collection moves objects from Eden space to the Survivor 1 and Survivor 2 spaces, and major collection moves objects from the young to the tenured generation.
5) Whenever a major garbage collection occurs, application threads stop for its duration, which reduces the application's performance and throughput.
6) A few performance improvements have been applied to garbage collection in Java 6, and we usually use JRE 1.6.20 for running our applications.
7) The JVM command line options -Xms and -Xmx are used to set the starting and maximum size of the Java heap. In my experience the ideal ratio of these parameters is either 1:1 or 1:1.5; for example, you can have both -Xms and -Xmx at 1 GB, or -Xms 1.2 GB and -Xmx 1.8 GB.
8) There is no manual way of doing garbage collection in Java.

Read more: http://javarevisited.blogspot.com/2011/04/garbage-collection-in-java.html#ixzz1wub3g08g