I have a classification model that is trained to predict the probability that an object is approved or rejected, and we use the output of the final sigmoid as the score to rank these objects.

These objects have a high number of dimensions, so when a user rejects an object we also ask for an explicit reason why it was rejected, e.g. (rejected, reason_X). My question is: how do I incorporate this explicit signal into our model? We don't want to wait for hundreds of examples with the same rejection reason before the model eventually adjusts its weights.

I had the following idea:

If an object's rejection reason corresponds to one of the features in the object's representation (e.g. "object was too X"), we take that object and use it immediately to train the model. Since we explicitly know the problematic feature, we keep only that feature and "zero out" the others - or somehow freeze the other input layers of the network.

The idea is to backpropagate only along the paths corresponding to the rejection reason/feature - but my concern is that artificially constraining the network like this will distort its behavior.
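For concreteness, here is a minimal sketch of what I mean by the masked single-example update, assuming a small PyTorch feed-forward scorer. The layer sizes, feature count, learning rate, and feature index are placeholders for illustration, not our actual model:

```python
import torch
import torch.nn as nn

# Placeholder scorer: 16 input features, sigmoid handled by BCEWithLogitsLoss.
model = nn.Sequential(
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Linear(32, 1),
)
opt = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def update_on_rejection(x, rejected_feature_idx):
    """One immediate gradient step on a rejected object, keeping only the
    feature named in the rejection reason and zeroing out the rest."""
    mask = torch.zeros_like(x)
    mask[rejected_feature_idx] = 1.0
    x_masked = x * mask                        # "zero out" the other features
    target = torch.tensor([0.0])               # rejected -> label 0
    logit = model(x_masked.unsqueeze(0)).squeeze(0)
    loss = loss_fn(logit, target)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Example: the user said the object was rejected because of feature 3.
x = torch.randn(16)
update_on_rejection(x, rejected_feature_idx=3)
```

Note that zeroing the input is only one reading of "only backprop the relevant paths": gradients still flow through all downstream weights, just with the other inputs contributing nothing. An alternative would be to leave the input intact and mask the gradients of weights not connected to the flagged feature, which constrains the update differently.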

Does this make sense or is there a better approach? Is there a name for this problem/solution?


If there were a way to extend this to cases where we don't have an explicit feature that maps to the given reason, that would be even better - but that probably warrants a separate post.
