On a microcontroller (more specifically, on an Arduino Uno board using the ATmega328P microcontroller) I would normally use an infinite loop to check for inputs etc. (in Arduino land, this is normally the loop() function). If I leave this function blank, however, it doesn't cause any problems.
Classical programming pattern, having a main loop…
On a desktop/laptop with an Intel i7 CPU etc., if I ran a similar infinite loop (with nothing to do, or very little to do) it would pin the CPU at ~100% and generally spin up the fans (a delay could be added to prevent this, for example).
… we might be writing different main loops.
This same main loop would be bad practice on a microcontroller too, because it also runs that CPU at full load, which burns power. Don't do that, especially if you're running on battery.
Modern CPU cores have synchronization mechanisms that let you implement something like "let this loop's execution sleep until 1 ms has passed, or until this condition has changed".
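For example, on the ATmega328P itself you can already do that at the bare-metal level with its sleep modes: instead of spinning, halt the core until an interrupt signals that something has happened. A minimal sketch (the pin number and names are just illustrative):

```cpp
#include <avr/sleep.h>   // AVR-libc sleep-mode helpers

const uint8_t BUTTON_PIN = 2;              // INT0 on an Uno; illustrative choice
volatile bool buttonPressed = false;

void onButton() { buttonPressed = true; }  // runs in interrupt context

void setup() {
  pinMode(BUTTON_PIN, INPUT_PULLUP);
  pinMode(LED_BUILTIN, OUTPUT);
  attachInterrupt(digitalPinToInterrupt(BUTTON_PIN), onButton, FALLING);
}

void loop() {
  if (buttonPressed) {                     // "this condition has changed"
    buttonPressed = false;
    digitalWrite(LED_BUILTIN, !digitalRead(LED_BUILTIN));
  }
  // Instead of busy-waiting, halt the core until the next enabled interrupt
  // (pin change, timer, UART, ...). Idle mode keeps the peripherals running.
  set_sleep_mode(SLEEP_MODE_IDLE);
  sleep_enable();
  sleep_cpu();
  sleep_disable();
}
```

(On a stock Uno the millis() timer interrupt still wakes the core roughly once a millisecond, so this is more of a latency win than a dramatic power win; deeper sleep modes save more but shut down more peripherals.)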
That kind of "sleep until something happens" is basically at the core of any multi-tasking operating system – and essentially every OS that deserves the name is one by now. On microcontrollers you'll often find so-called RTOSes (real-time operating systems), which give guarantees on how sure you can be that the execution of something has started within so-and-so many nanoseconds, because that's typical for microcontroller use cases. On desktop and server CPUs you'll usually find fully-fledged multiprocessing OSes that make fewer guarantees on timing, but offer a much larger set of functionality and far more hardware and software abstraction.
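Under an RTOS, "sleep until 1 ms has passed, or until this condition has changed" becomes a one-liner. A sketch using the FreeRTOS API (the task name and the 1 ms figure are just for illustration):

```cpp
#include <stdint.h>
#include "FreeRTOS.h"
#include "task.h"

void sensorTask(void *params) {
  (void)params;
  for (;;) {
    uint32_t events = 0;
    // Block here until another task or an ISR notifies this task, or until
    // 1 ms has elapsed, whichever comes first. The scheduler runs other
    // tasks (or idles the CPU) in the meantime.
    if (xTaskNotifyWait(0, 0xFFFFFFFFUL, &events, pdMS_TO_TICKS(1)) == pdTRUE) {
      // woken by a notification: react to the changed condition
    } else {
      // timed out: do the periodic housekeeping
    }
  }
}
```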
I don't know the Arduino execution environment well enough to make fully qualified statements about it – I'm researching this as I write: Arduino seems not to be designed for this; it really expects you to just spin busily. Since it has no "yield" functionality, the "housekeeping" it does between calls to your loop() can't run while you're inside the built-in delay function. Ugh! Bad design.
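For context, the AVR Arduino core's main() is essentially the following (a simplified paraphrase, not the verbatim source): your loop() is called in a hard spin, with the serial-event housekeeping squeezed in between iterations, so anything that blocks inside loop() also blocks that housekeeping.

```cpp
// Simplified paraphrase of the AVR Arduino core's main.cpp (USB and
// board-variant init omitted), to show where the "housekeeping" lives.
int main(void) {
  init();                                   // core setup: timers, ADC, ...
  setup();                                  // your setup()
  for (;;) {
    loop();                                 // your loop(), forever
    if (serialEventRun) serialEventRun();   // housekeeping between iterations
  }
}
```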
In a power- and/or latency-aware design, you'd use an RTOS for your microcontroller. FreeRTOS is pretty popular; for the ARM Cortex-M series, mbed has a lot of traction; I personally like ChibiOS (but I don't think that's a good choice when switching over from Arduino sketches); the Linux Foundation is pushing Zephyr (which I'm conflicted about). Really, there's a wealth of choices, and the manufacturer of your microcontroller usually supports one or more of them through their IDEs.
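The firmware structure then typically becomes a handful of tasks that each block until they have something to do. Again a FreeRTOS-flavored sketch; the 10 ms period and the names are made up for illustration:

```cpp
#include "FreeRTOS.h"
#include "task.h"

// A fixed-rate control task: wake every 10 ms, do a bounded amount of
// work, and hand the CPU back to the scheduler (or to sleep) in between.
void controlTask(void *params) {
  (void)params;
  TickType_t lastWake = xTaskGetTickCount();
  for (;;) {
    // read inputs, update outputs ... bounded work here
    vTaskDelayUntil(&lastWake, pdMS_TO_TICKS(10));
  }
}
```

Most of these kernels also offer a "tickless idle" mode that drops the MCU into a low-power state for the whole gap between wake-ups, which is where the real battery savings come from.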
Why is this seemingly ok on a microcontroller but not usually wanted on a microprocessor?
It's not really OK – it's in fact an unusual design pattern for microcontrollers, which typically do things at regular intervals or react to external stimuli. It's unusual to want to continuously use as much CPU as you can on a microcontroller.
There are exceptions to that pattern, and they exist in the MCU world as well as in the server/desktop processor world: when you know you practically always have, say, network data to process in a switch appliance, or when you know your game could always precompute a bit of world that you may or may not need in a few moments, then you'll find these spin loops. In some hardware drivers you'll find "spin locks", meaning the CPU continuously polls a value until it has changed (e.g. the hardware is done setting up and can now be used), but that's generally an emergency solution only, and you'll have to justify it when trying to get such code into Linux, for example.
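For a concrete picture of what such a spin-wait looks like on the ATmega328P: start an ADC conversion and busy-poll the "conversion in progress" bit until the hardware clears it. This sketch assumes the ADC has already been enabled and prescaled (the Arduino core's init() does that for you):

```cpp
#include <avr/io.h>
#include <stdint.h>

// Blocking ADC read: acceptable because the wait is bounded (~100 us),
// but it is still a spin loop burning CPU cycles on purpose.
uint16_t adcReadBlocking(uint8_t channel) {
  ADMUX  = (1 << REFS0) | (channel & 0x0F);  // AVcc reference, select channel
  ADCSRA |= (1 << ADSC);                     // start a conversion
  while (ADCSRA & (1 << ADSC)) {             // spin until hardware clears ADSC
    // nothing to do but burn cycles
  }
  return ADC;                                // 10-bit result
}
```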
Am I right in thinking that the ATmega is in fact running at 100%, and that because it is so low powered it doesn't cause any obvious heat problems?
Yes. The ATmega isn't, by modern standards, a low-power device, but it is low-power enough for the heat not to become a problem.