This comes down to implementation details of the RAID layer(s).
For example, suppose you use hardware RAID to do the striping and software RAID on top to do the mirroring. The software RAID doing the mirroring won't see the individual disks, only the two devices presented by the hardware RAID, one per stripe group.
In such a configuration, when Disk 1 fails, the hardware RAID controller will return errors on the Group 1 device. The software RAID on top must then consider the entire Group 1 as failed, as it won't see the individual disks.
If Disk 5 then fails, the Group 2 device will start returning errors too, and as far as the software RAID doing the mirroring is concerned, that's a double failure - data is lost. Game over.
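The layered failure logic above can be sketched as a toy model. The structure and names here are purely illustrative (this is not any real RAID implementation's API); it assumes six disks, with Group 1 = Disks 1-3 and Group 2 = Disks 4-6:

```python
# Toy model of layered "RAID 0+1": a software mirror over two hardware
# stripe groups. All names and structure are illustrative only.

# Each stripe group is a hardware RAID 0 over three disks: if ANY member
# disk fails, the whole group device starts returning errors.
def group_ok(member_disks_ok):
    return all(member_disks_ok)

# The software mirror only sees the two group devices; it survives as
# long as at least one group is still healthy.
def mirror_ok(groups_ok):
    return any(groups_ok)

# Six disks, all healthy: Group 1 = Disks 1-3, Group 2 = Disks 4-6.
disks = {d: True for d in range(1, 7)}

def array_ok():
    g1 = group_ok([disks[1], disks[2], disks[3]])
    g2 = group_ok([disks[4], disks[5], disks[6]])
    return mirror_ok([g1, g2])

disks[1] = False      # Disk 1 fails -> the whole Group 1 device fails
print(array_ok())     # True: Group 2 still carries a full copy

disks[5] = False      # Disk 5 fails -> Group 2 fails too
print(array_ok())     # False: a double failure as seen by the mirror,
                      # even though 4 of the 6 disks are still healthy
```

Note that the mirror layer loses the array with only two dead disks out of six, because it has no way to know that Disks 2, 3 and 4 still hold usable data.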
If you then try to recover manually using Disks 4, 2 and 3, you run into the problem that after Disk 1 failed, the software RAID stopped updating Group 1 entirely. So Disk 4 will hold newer data than Disks 2 and 3, unless Disks 1 and 5 failed at the exact same moment... which is unlikely. And because of striping, any contiguous piece of data longer than one stripe risks combining parts of both the "older" and "newer" sets of stripes, resulting in a corrupted mess.
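A quick sketch of why that manual recovery produces garbage, using a hypothetical per-stripe "generation" counter on each disk (the counters and disk layout are illustrative, not how any real implementation stores metadata):

```python
# Toy sketch of the stale-data problem after Group 1 drops out.
# Each disk holds one "generation" number per stripe; all values
# are illustrative.

STRIPES = 4

# Both groups were in sync at generation 1 when Disk 1 failed;
# Disks 2 and 3 are frozen at that point.
group1 = {"disk2": [1] * STRIPES, "disk3": [1] * STRIPES}
group2 = {"disk4": [1] * STRIPES, "disk5": [1] * STRIPES, "disk6": [1] * STRIPES}

# After Disk 1 fails, the software mirror keeps writing to Group 2 only.
for disk in group2.values():
    for i in range(STRIPES):
        disk[i] = 2            # Group 2 advances to generation 2

# Attempted manual recovery from Disks 4, 2 and 3: Disk 4 is fresh,
# Disks 2 and 3 are stale.
salvaged = {"disk4": group2["disk4"],
            "disk2": group1["disk2"],
            "disk3": group1["disk3"]}

# Striping spreads consecutive stripes across the member disks, so any
# piece of data longer than one stripe spans several disks and thus
# mixes generations.
generations = {gen for disk in salvaged.values() for gen in disk}
print(sorted(generations))     # [1, 2]: old and new data interleaved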
If both striping and mirroring are done by the same RAID implementation, i.e. either just a hardware RAID controller that can do "RAID 10" or "RAID 0+1", or just a software RAID implementation that can do the same, then the implementation might be smart enough to keep updating Disks 2 and 3 after Disk 1 fails, even though the Group 1 stripe set will no longer be complete. If Disk 5 then fails too, the controller may be smart enough to see that Disks 4 + 2 + 3 together form a valid set, and keep on running.
Whenever the same RAID implementation handles both the striping and the mirroring, modern implementations usually work the way you seem to be thinking - they track the health of each copy of each set of stripes and will keep working as long as a complete set of stripes can be found.
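That per-copy tracking can be sketched with another toy model, this time assuming a classic RAID 10 layout (a stripe over mirror pairs). The pairing below (Disk 1 with 4, 2 with 5, 3 with 6) is an illustrative assumption, not a statement about any particular product:

```python
# Toy model of RAID 10 done by a single implementation: a stripe laid
# over mirror pairs, with the health of each copy tracked individually.
# The pairing (1+4, 2+5, 3+6) is illustrative.

pairs = [(1, 4), (2, 5), (3, 6)]
disks_ok = {d: True for d in range(1, 7)}

def array_ok():
    # The array survives as long as every mirror pair still has at
    # least one healthy copy of its slice of the stripes.
    return all(disks_ok[a] or disks_ok[b] for a, b in pairs)

disks_ok[1] = False
print(array_ok())   # True: Disk 4 still covers pair (1, 4)

disks_ok[5] = False
print(array_ok())   # True: Disks 4, 2 and 3 still form a complete set

disks_ok[4] = False
print(array_ok())   # False: both copies of the (1, 4) pair are gone
```

Contrast this with the layered 0+1 case: here the same two failures (Disks 1 and 5) leave the array running, because the implementation knows which individual copies are still valid rather than writing off a whole stripe group.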
However, this is not something you should blindly trust: you should carefully research your RAID implementation in advance, so that when (not "if"!) disks start failing, you'll know what your RAID implementation can and cannot do.
And if you layer different RAID implementations on top of each other (for example, if you use the OS's built-in software RAID to mirror data between two large SAN storage systems located in different buildings for disaster tolerance), you should carefully think through the failure scenarios in the design phase, before you even start implementing your setup.