I am trying to use the OWW (openWakeWord) training notebook and I can get a .onnx file, but it is odd because the export also produces an .onnx.data file that the model needs:
(venv) jrg ~/code/CodeMash/train-word > ls -al
total 113112
-rw-r--r-- 1 jrg staff 5236352 Nov 23 21:48 hello_robot.npy
-rw-r--r-- 1 jrg staff 12845 Nov 23 21:57 hello_robot.onnx
-rw-r--r-- 1 jrg staff 348160 Nov 23 21:57 hello_robot.onnx.data
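(Side note: if the separate .onnx.data file itself turns out to be a problem, I believe the onnx Python package can fold the external weights back into a single file; this is just a rough sketch, with hello_robot_single.onnx being a name I made up, and I don't think it's the cause of the error below.)

import onnx

# load() also pulls in the tensors stored in hello_robot.onnx.data
model = onnx.load("hello_robot.onnx")
# save() writes everything into one self-contained file by default
onnx.save(model, "hello_robot_single.onnx")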
But when I try running with them, I get:
❌ [error] Error in wake word detection
Object {error: "Got invalid dimensions for input: input for the fo…lease fix either the inputs/outputs or the model."}
error = "Got invalid dimensions for input: input for the following indices\n index: 1 Got: 16 Expected: 28\n Please fix either the inputs/outputs or the model."… (length: 148)
[[Prototype]] = Object
logger.js:40ℹ️ [info] ✅ Welcome message spoken
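To double-check what input shape the exported model actually declares, I think the graph inputs can be inspected with the onnx package (a quick diagnostic sketch, not part of the notebook):

import onnx

model = onnx.load("hello_robot.onnx")
for inp in model.graph.input:
    # each dimension is either a fixed dim_value or a symbolic dim_param
    dims = [d.dim_value if d.dim_value else d.dim_param for d in inp.type.tensor_type.shape.dim]
    print(inp.name, dims)  # I would expect something like [1, 28, 96] given the export below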
So apparently I am exporting the model with the wrong number of input dimensions. How do I fix it? The original code from the notebook is:
import numpy as np
import torch
from torch import nn

# Load the data prepared in previous steps (it's small enough to load entirely in memory)
negative_features = np.load("negative_features.npy")
positive_features = np.load("hello_robot.npy")

X = np.vstack((negative_features, positive_features))
y = np.array([0]*len(negative_features) + [1]*len(positive_features)).astype(np.float32)[..., None]

# Make PyTorch dataloader
batch_size = 512
training_data = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(torch.from_numpy(X), torch.from_numpy(y)),
    batch_size=batch_size,
    shuffle=True,
)

# Small fully connected network over the flattened (timesteps, features) input
layer_dim = 32
fcn = nn.Sequential(
    nn.Flatten(),
    nn.Linear(X.shape[1]*X.shape[2], layer_dim),  # input is flattened, so timesteps * feature columns
    nn.LayerNorm(layer_dim),
    nn.ReLU(),
    nn.Linear(layer_dim, layer_dim),
    nn.LayerNorm(layer_dim),
    nn.ReLU(),
    nn.Linear(layer_dim, 1),
    nn.Sigmoid(),
)

loss_function = torch.nn.functional.binary_cross_entropy
optimizer = torch.optim.Adam(fcn.parameters(), lr=0.001)

# Export to ONNX; the 'args' tensor is the shape of a single example
output_path = "hello_robot.onnx"
torch.onnx.export(fcn, args=torch.zeros((1, 28, 96)), f=output_path)
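My guess (unconfirmed) is that the hard-coded (1, 28, 96) dummy input doesn't match the 16 feature frames the runtime is actually feeding at inference time. A diagnostic sketch I would try, just to compare the shapes:

# Diagnostic only: check how many feature frames each training example has,
# and what the export shape would be if it were derived from the data.
print("positive features:", positive_features.shape)   # e.g. (num_clips, timesteps, 96)
print("negative features:", negative_features.shape)
print("shape derived from the data:", (1, X.shape[1], X.shape[2]))

# If the runtime really does send 16 frames per window, then the network input size
# (X.shape[1]*X.shape[2] in the first Linear layer) and the export dummy would both
# have to be built around 16 -- but that's the part I'm not sure how to line up.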