Please see below several approaches, both in a canonical form and as environment-specific examples (Python / NumPy / scikit-learn ecosystem).
First, I reconstructed a somewhat similar sample (Python / NumPy):
import numpy as np
import matplotlib.pyplot as plt

m = 10  # points per curve
n = 5   # number of curves
a = np.arange(-m / 2, m / 2)  # baseline x values: -5 .. 4
a = a ** 3                    # cubic baseline shared by all curves
b = np.empty((n, m))          # holds the n sample curves
for i in range(n):
    # each curve = baseline + a random per-curve offset + some noise
    b[i] = a + (i * np.random.normal(scale=3.)) + np.random.rand(m)
fig, axs = plt.subplots(1, 1, sharex=True)
for i in range(n):
    axs.plot(b[i])

Then, I added some extra noise to the last curve:
b[-1] += np.random.rand(m) * 100
fig, axs = plt.subplots(1, 1, sharex=True)
for i in range(n):
    axs.plot(b[i])

Now for the first approach - statistical analysis using the Correlation Coefficient Matrix:
The hypothesis: the Correlation Coefficient Matrix of the curves should expose the "rogue" curve through its correlation vector against the rest of the curves.
You'll need to test it on your real data, but it seems that the similar curves get a high (close to 1.0) correlation score. Then you'll want to define some eps and flag correlations below (1 - eps) as a suspicious or "unhealthy" state.
np.corrcoef(b)
array([[1. , 0.99999092, 0.99995627, 0.99995058, 0.88247431],
[0.99999092, 1. , 0.99997068, 0.9999665 , 0.88147866],
[0.99995627, 0.99997068, 1. , 0.99998015, 0.88093051],
[0.99995058, 0.9999665 , 0.99998015, 1. , 0.88041311],
[0.88247431, 0.88147866, 0.88093051, 0.88041311, 1. ]])
In the above sample test it's clear that the last (purple) curve's correlation vector members fall below the threshold:
[0.88247431, 0.88147866, 0.88093051, 0.88041311]
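To turn that observation into a programmatic check, here's a minimal sketch (the eps value is an assumption to tune on your real data; the median is used so that a single rogue curve doesn't drag down the healthy curves' scores):
corr = np.corrcoef(b)
eps = 0.01  # assumption: tune on your real data
n_curves = corr.shape[0]
# each curve's correlations with all the other curves (diagonal removed)
off_diag = corr[~np.eye(n_curves, dtype=bool)].reshape(n_curves, -1)
# median is robust as long as fewer than half the curves are rogue
med_corr = np.median(off_diag, axis=1)
suspects = np.where(med_corr < 1 - eps)[0]
print(suspects)  # -> [4] for the sample above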
A second approach - Outlier Detection:
The hypothesis: an unsupervised outlier detection algorithm, fed with the curves' values, should detect the "rogue" curve as an outlier.
I recommend trying several unsupervised outlier detection algorithms; the scikit-learn documentation has a nice illustration comparing the available algos (Python / scikit-learn).
For example, I tried the LocalOutlierFactor:
from sklearn.neighbors import LocalOutlierFactor
X = b
clf = LocalOutlierFactor(n_neighbors=3)
clf.fit_predict(X)
array([ 1, 1, 1, 1, -1])
The result tells us it suspects the last curve.
Looking at the more detailed score, negative_outlier_factor_ (quoting the scikit-learn docs): "... The opposite LOF of the training samples. The higher, the more normal. Inliers tend to have a LOF score close to 1 (negative_outlier_factor_ close to -1), while outliers tend to have a larger LOF score ..."
clf.negative_outlier_factor_
array([-1.06224748, -1.05244769, -0.94720943, -0.94720943, -3.47148487])
Curves #1-4 are around -1 ± eps, while #5 is clearly far from the pack (-3.47148487).
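If you'd rather threshold the scores than eyeball them, a small sketch (the -1.5 cutoff is an assumption to tune; it happens to match the default offset_ scikit-learn uses when contamination='auto'):
scores = clf.negative_outlier_factor_
cutoff = -1.5  # assumption: inliers sit near -1, so anything well below is suspect
suspects = np.where(scores < cutoff)[0]
print(suspects)  # -> [4] for the sample above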
Another approach - Cluster Analysis:
The hypothesis: a clustering algo should detect the "healthy" curves as 1 (or maybe more) dense clusters, to which the "rogue" curves shouldn't belong. Whether it marks them as outliers, or their cluster properties point to that, depends on the design of the specific algo.
Look for algos / implementations that:
- Do not require the number of clusters in advance (or try one with k=1, under the hypothesis that all curves except the outliers fall in the same cluster)
- Provide some kind of scoring
- Provide some kind of outlier indication
- Preferably are density-based (that's more from experience and personal preference)
You could then test by visual inspection and / or plotting after dimensionality reduction to 2 or 3 dimensions (for 2D / 3D plots).
For example: HDBSCAN's Outlier Detection.
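A minimal sketch of that route (assumes the hdbscan package is installed; with only 5 sample curves the parameters are pushed to their minimum, so treat min_cluster_size and the 0.9 cutoff as assumptions to tune on your real data):
import hdbscan

clusterer = hdbscan.HDBSCAN(min_cluster_size=2, allow_single_cluster=True)
clusterer.fit(b)
# labels_: cluster assignment per curve; -1 marks noise / outliers
print(clusterer.labels_)
# outlier_scores_: GLOSH outlier scores in [0, 1]; higher = more outlier-ish
print(clusterer.outlier_scores_)
suspects = np.where(clusterer.outlier_scores_ > 0.9)[0]  # assumption: cutoff to tune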