Network Performance Analysis

CS 441 Lecture, Dr. Lawlor

Here's a trivial network server that accepts data.  It doesn't do anything with the data, making it a sort of data "black hole".
// Black hole TCP server: clients connect, send data, that's it.
#include "osl/socket.h"
#include "osl/socket.cpp"

int foo(void) {
	unsigned int port=8081;
	SERVER_SOCKET serv=skt_server(&port);
	cout<<"Created server on port "<<port<<"\n";
	while (true) {
		while (skt_select1(serv,500)==0) {
			//cout<<"Still waiting for connection...\n";
		}
		//cout<<"Accepting connection from client\n";
		if (fork()==0)
		{ // I am the fork-child: my job is to talk to this client
			skt_ip_t clientIP; unsigned int clientPort;
			SOCKET s=skt_accept(serv,&clientIP,&clientPort);
			char clientName[1000]; skt_print_ip(clientName,clientIP);
			//cout<<"Accepted client "<<clientName<<" port "<<clientPort<<"\n";
			unsigned int n=0;
			skt_recvN(s,&n,sizeof(n)); /* read the byte count first... */
			if (n<=16*1024*1024) { /* sanity-check the count before allocating */
				char buf[n];
				skt_recvN(s,buf,n); /* ...then read (and ignore) the data */
			}
			skt_close(s);
			exit(0); /* I exit happily */
		}
	}
	return 0;
}

(Try this in NetRun now!)

You can't actually run this on lawlor.cs, because I'm already running a copy, so you get a "fatal error binding server socket".  In general, at most one program can listen on a given TCP port number, so you need to pick a port number different from anything else running on that machine.

Here's the corresponding client.  Note that every "recv" in the server is matched by a "send" in the client (and, in general, vice versa).
// Black hole TCP client: sends data into the black hole
#include "osl/socket.h"
#include "osl/socket.cpp"

int foo(void) {
	unsigned int port=8081;
	/* This is lawlor.cs's IP address. I need to hardcode the IP for NetRun... */
	skt_ip_t ip=skt_lookup_ip("137.229.25.247");
	for (int rep=0;rep<10;rep++) {
		double start1=time_in_seconds();
		SOCKET s=skt_connect(ip,port,2);
		int n=1000;
		double start2=time_in_seconds();
		skt_sendN(s,&n,sizeof(n)); /* send byte count first... */
		char buf[n]; /* contents don't matter--the server ignores them */
		skt_sendN(s,buf,n); /* ...then send data */
		//for (int i=0;i<n;i++) skt_sendN(s,&buf[i],1); // one byte at a time
		skt_close(s);
		double elapsed1=(time_in_seconds()-start1)*1.0e9;
		double elapsed2=(time_in_seconds()-start2)*1.0e9;
		printf("That run: %.0f ns including connect, %.0f ns data\n",elapsed1,elapsed2);
	}
	return 0;
}

(Try this in NetRun now!)

Typical timing from 64-bit server to 32-bit lawlor.cs:
That run: 295877 ns including connect, 11921 ns data
The bottom line is that setting up the TCP connection takes far longer than actually sending the data.
Curiously, the "Time" checkbox runs the code so often that it seems to trigger some sort of denial-of-service preventer inside the kernel, resulting in anomalously slow performance.