Suppose I measure 100 samples from a normal distribution and use them to compute a sample standard deviation.
Is there a way to compute +/- error bounds on that computed standard deviation, so that I know, with 95% confidence, the range the true standard deviation would fall in had I measured 1 million samples instead of the original 100?
Practical application: I characterize 100 units with the intent of creating max and min specifications for my product's standard deviation. A customer buys 1 million units and wants to know, with 95% confidence, what max and min values for the standard deviation we guarantee. How can I create a specification for my product's datasheet that satisfies the customer's interest when I measure fewer units than the customer buys?
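To make it concrete, here is the kind of calculation I think I'm after (a rough sketch in Python using a chi-square interval for sigma — I'm not sure this is the right approach, and the data below are just simulated stand-ins for my 100 measurements):

```python
import numpy as np
from scipy import stats

# Hypothetical stand-in for my 100 characterized units, assumed normal
rng = np.random.default_rng(0)
x = rng.normal(loc=10.0, scale=2.0, size=100)

n = len(x)
s = np.std(x, ddof=1)  # sample standard deviation

# If (n-1)*s^2 / sigma^2 ~ chi2(n-1), then a 95% CI for the true sigma is:
alpha = 0.05
lower = s * np.sqrt((n - 1) / stats.chi2.ppf(1 - alpha / 2, df=n - 1))
upper = s * np.sqrt((n - 1) / stats.chi2.ppf(alpha / 2, df=n - 1))

print(f"s = {s:.3f}, 95% CI for sigma: [{lower:.3f}, {upper:.3f}]")
```

Is this the correct way to turn my 100-unit measurement into guaranteed min/max standard deviation limits for the datasheet, or is there a better-suited method?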