
I am working in an 8-D parameter space where every parameter lies on the interval [0, 1]. The number of local maxima in this space, and how they are positioned relative to one another, is far more interesting to me than the exact value of the global maximum. Since gradient ascent can miss the global maximum precisely because it gets stuck at local maxima, I thought I could turn its biggest 'flaw' to my advantage.

My plan was to pick around 50,000 points spread evenly throughout the parameter space, run gradient ascent from each one as the initial guess, and collect the distinct points of convergence as the local maxima. The problem is that evaluating the gradient at a single point is computationally expensive, so running that many gradient ascents is not feasible on my computer. The only solution I can think of is to use fewer starting points, but then the chance of missing a local maximum grows. Is there some other way to go about this that I am not thinking of?
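For concreteness, here is a minimal sketch of the multi-start loop I have in mind, with a cheap stand-in for my real (expensive) objective `f`. It uses SciPy's L-BFGS-B (finite-difference gradients, box constraints) and Sobol starting points, and merges converged points that land near each other:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import qmc

DIM = 8

def f(x):
    # Cheap stand-in for the real (expensive) objective.
    return float(np.sum(np.sin(5 * np.pi * x) ** 2))

# Sobol points cover the unit cube far more evenly than a random
# sample or a coarse grid of the same size in 8 dimensions.
starts = qmc.Sobol(d=DIM, seed=0).random_base2(8)  # 2**8 = 256 starts

maxima = []  # list of (location, value) pairs
for x0 in starts:
    # Maximize f by minimizing -f; L-BFGS-B approximates the gradient
    # with finite differences and enforces the [0, 1] box constraints.
    res = minimize(lambda x: -f(x), x0, method="L-BFGS-B",
                   bounds=[(0.0, 1.0)] * DIM)
    if not res.success:
        continue
    # Merge converged points that lie within a tolerance of a maximum
    # we have already recorded.
    if all(np.linalg.norm(res.x - m) > 1e-3 for m, _ in maxima):
        maxima.append((res.x, -res.fun))

print(f"{len(maxima)} distinct local maxima found")
```

With the real objective, each of those finite-difference gradient estimates costs 8+ function evaluations per iteration, which is exactly the expense I am trying to avoid.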

  • You can always use derivative-free methods, if this is an issue (see the sketch after these comments). Commented Jun 28, 2021 at 6:46
  • Some suggestions about optimization of expensive functions are here: stats.stackexchange.com/questions/193306/… but they are primarily oriented around finding the best optimum, not all optima. Commented Jun 28, 2021 at 12:53
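A minimal sketch of the derivative-free suggestion from the first comment, reusing the same stand-in objective as above. Nelder-Mead needs only function values, so each iteration avoids the cost of a gradient estimate; the clipping is just one way to keep iterates inside the unit cube:

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    # Same cheap stand-in objective as in the sketch above.
    return float(np.sum(np.sin(5 * np.pi * x) ** 2))

def neg_f(x):
    # Nelder-Mead is unconstrained here; clip to stay in the unit cube.
    return -f(np.clip(x, 0.0, 1.0))

# One run from the cube's center; in practice this would replace the
# gradient-based inner step of the multi-start loop above.
res = minimize(neg_f, x0=np.full(8, 0.5), method="Nelder-Mead",
               options={"xatol": 1e-4, "fatol": 1e-6})
print(res.x, -res.fun)
```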
