Let me describe the problem: I have a client-server application that I would like to run some tests on. One of the tests involves measuring the traversal time for a data packet to reach its destination. What I am doing right now is sending 50 packets of 10 bytes each and measuring the time each one takes to arrive.
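For concreteness, the measurement step looks roughly like the sketch below (this is a simplified stand-in, not my actual code; the address, port, and function names are placeholders, and it assumes the sender's and receiver's clocks are synchronized, since I am measuring one-way delay):

```python
import socket
import struct
import time

PACKET_COUNT = 50
RECEIVER = ("192.0.2.10", 9000)  # placeholder address and port


def send_probes():
    """Sender side: 50 packets of 10 bytes, each carrying its send timestamp."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for seq in range(PACKET_COUNT):
        payload = struct.pack("!Hd", seq, time.time())  # 2 + 8 = 10 bytes
        sock.sendto(payload, RECEIVER)
        time.sleep(0.05)  # small gap between probes


def receive_probes():
    """Receiver side: record one traversal time per arriving packet."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", 9000))
    samples = []
    while len(samples) < PACKET_COUNT:
        data, _ = sock.recvfrom(1024)
        seq, sent_at = struct.unpack("!Hd", data)
        samples.append(time.time() - sent_at)  # one-way delay in seconds
    return samples
```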
So say I have a set of 50 values, each denoting a traversal time. How would I go about calculating a confidence interval for the mean traversal time?
Is it as simple as just calculating:
the mean $\bar{x} = \frac{1}{n}\sum\limits_{j=1}^n x_j$
the standard error of the mean $\sigma_{\bar{x}} = \frac{S}{\sqrt{n}}$, where the sample standard deviation is $S = \sqrt{\frac{1}{n}\left(\sum\limits_{j=1}^n x_j^2-\frac{1}{n}\Bigl(\sum\limits_{j=1}^n x_j\Bigr)^2\right)}$
$\bar{x}\pm1.96\times\sigma_{\bar{x}}$, for a 95% confidence interval.
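In code, the calculation I have in mind is essentially the following sketch (the sample data is a made-up placeholder; note that `statistics.stdev` uses an $n-1$ denominator rather than the $\frac{1}{n}$ in the formula above, which matters little for $n = 50$):

```python
import math
import random
import statistics

# Placeholder data standing in for the 50 measured traversal times (seconds).
random.seed(0)
samples = [0.012 + random.gauss(0, 0.002) for _ in range(50)]

n = len(samples)
mean = statistics.fmean(samples)      # sample mean, x-bar
s = statistics.stdev(samples)         # sample standard deviation (n - 1 denominator)
sem = s / math.sqrt(n)                # standard error of the mean, S / sqrt(n)
z = 1.96                              # normal quantile for ~95% confidence

lower, upper = mean - z * sem, mean + z * sem
print(f"mean = {mean:.6f} s, 95% CI = [{lower:.6f} s, {upper:.6f} s]")
```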
I suspect that this might be the wrong approach. If it is, why?
Yours truly