Friday, September 24, 2010

The Care and Feeding of Your z/VSE TCP/IP Stack

I recently suggested to a customer that they might get better throughput with their application by changing the priority so that the application ran at a higher priority than the TCP/IP stack partition. I received a questioning response. Really? Doesn't the stack always run higher priority than the applications it services? My answer was: a nice rule, but not always true.

By way of explanation, we can look at throughput in terms of the care and feeding of your application and of the TCP/IP stack.

Feeding the TCP/IP stack is something you do when your application is sending data. Your application feeds data to the stack, which queues it into its transmit buffer. Keeping the transmit buffer full ensures the stack always has data to transmit. Once the stack has taken all the data from the send and queued it into the transmit buffer, the application's send request is posted complete.

You can see that if the stack is busy and your application is running at a lower priority than the stack, your application may not get dispatched to send more data until later. It is possible that the stack will transmit all the data in the transmit buffer before your application can send more, leaving the transmit buffer empty and reducing throughput.

One way to ensure the stack has data available to queue into the transmit buffer is to use large send buffers. Large send buffers (for example, 1MB) help keep the stack's transmit buffer full of data to send. They are most helpful when you are using network interfaces with large MTU sizes, such as a HiperSockets interface.
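As a rough illustration, here is a minimal sketch of a sending application using standard C sockets (the LE C socket API available on z/VSE follows the same pattern). The function name, the 1MB buffer size, and the error handling are assumptions for the sake of the example, not a prescribed implementation.

#include <sys/types.h>
#include <sys/socket.h>
#include <stdio.h>

int send_bulk(int sock, const char *buf, size_t len)
{
    int sndbuf = 1024 * 1024;   /* request a 1MB send buffer (illustrative) */
    size_t sent = 0;

    /* A large send buffer lets the stack queue more data at once, helping
       keep its transmit buffer full; the stack may adjust the value it
       actually grants. */
    if (setsockopt(sock, SOL_SOCKET, SO_SNDBUF,
                   (char *)&sndbuf, sizeof(sndbuf)) < 0) {
        perror("setsockopt(SO_SNDBUF)");
        return -1;
    }

    /* Hand the data to the stack in large pieces; send() is posted complete
       as soon as the data has been queued into the stack's transmit buffer,
       not when it has actually gone out on the network. */
    while (sent < len) {
        int rc = send(sock, buf + sent, len - sent, 0);
        if (rc < 0) {
            perror("send");
            return -1;
        }
        sent += (size_t)rc;
    }
    return 0;
}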

Feeding your application is something the TCP/IP stack does. When data arrives from the network it is placed in the stack's receive buffer and the application is posted that data is available. The application must then issue a read for the data. If the application is running at a lower priority than the stack, it may be some time before the application actually gets dispatched to read and process the data. In the worst case, the stack's receive buffer may become full, forcing the stack to close the TCP window and stopping the data flow from the remote host.
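Again as a sketch, and with the same caveat that the names and sizes are illustrative, the receiving side boils down to reading promptly and repeatedly so the stack's receive buffer keeps draining:

#include <sys/types.h>
#include <sys/socket.h>
#include <stdio.h>

int drain_socket(int sock, void (*process)(const char *data, int len))
{
    char buf[65536];   /* read in large pieces */
    int  rc;

    /* Each recv() empties part of the stack's receive buffer.  If the
       application cannot get dispatched to issue these reads, the receive
       buffer fills, the stack closes the TCP window, and the remote host
       stops sending. */
    while ((rc = recv(sock, buf, sizeof(buf), 0)) > 0) {
        process(buf, rc);   /* hand the data to the application */
    }
    if (rc < 0) {
        perror("recv");
        return -1;
    }
    return 0;   /* 0 means the remote side closed the connection */
}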

Wow, it sounds like I should run all my applications higher priority than the TCP/IP stack. No, not at all. In practice, only bulk data transfer applications run into these types of problems.

The general rule of running the stack at a higher priority than the applications it services applies to almost all applications. Interactive and multi-user applications such as CICS TS, your TN3270E server, and even DB2 actually benefit from having the TCP/IP stack running at a higher priority.

In addition, applications that are primarily sending data out into the network generally show little throughput increase from running higher priority than the TCP/IP stack. Keeping the stack's transmit buffer full is usually easy even when the application runs at a lower priority, provided the application still gets enough CPU time to keep that buffer stocked.

What does this leave? Perhaps a batch FTP job retrieving data from the network, your FTP server partition, or the IBM VTAPE (TAPESRVR) partition might benefit from running at a higher priority than the TCP/IP stack partition.
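If you decide to try it, partition dispatching priorities are changed with the z/VSE PRTY attention routine command. The partition IDs below are purely illustrative assumptions, and you should verify the operands and your own partition layout against your z/VSE release before changing anything:

PRTY                        show the current priority sequence
PRTY BG,F4,F6,F8,F7,F9      set priorities, lowest on the left, highest on the right

In this illustrative sequence, F9 (say, a batch FTP partition) would be dispatched ahead of F7 (say, the TCP/IP stack partition).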

So there you have it. You can make a case for running some applications higher priority than the TCP/IP stack.
