Certainly, you can cast the int * you get from &some_int, as alluded to by others. Here's a strongly non-portable macro along those lines that'll work for any type of integer (it must be stored in an lvalue): #define nth_byte(lvalue, n) ((char *) &(lvalue))[n]
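For instance, here's a minimal sketch of that macro in use (the 32-bit int and the hex value are my assumptions, purely to make the endianness visible):

    #include <stdio.h>

    /* Non-portable: which byte you get for a given n depends on how this
       particular machine lays the value out in memory (its endianness). */
    #define nth_byte(lvalue, n) ((char *) &(lvalue))[n]

    int main(void)
    {
        int some_int = 0x11223344;   /* assumes a 32-bit int, just for the demo */

        for (unsigned i = 0; i < sizeof some_int; i++)
            printf("%02x ", (unsigned char) nth_byte(some_int, i));
        putchar('\n');   /* 44 33 22 11 on little-endian, 11 22 33 44 on big-endian */
        return 0;
    }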
On a side note, sometimes bytes are larger than 8 bits; we'll come back to that. Imagine using a union to type-pun an int and assuming any resulting padding bytes are "well defined since C99"... my non-portable crud works for any type and doesn't use any function calls either (not that this matters, since we have good compilers that can quickly perform complex optimisations like hoisting and dead code elimination).
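For comparison, here's roughly what that union type-pun looks like (the union and its names are mine, not anything from the question); it swaps a cast for a union, but the byte order you observe is exactly as machine-dependent:

    #include <stdio.h>

    union punner {
        int           value;
        unsigned char bytes[sizeof(int)];
    };

    int main(void)
    {
        union punner p;
        p.value = 0x0a0b0c0d;   /* assumes int holds at least 32 bits, for the demo */

        /* Reading p.bytes after writing p.value is the classic union type-pun.
           Which byte holds which part of the value still depends on endianness. */
        for (unsigned i = 0; i < sizeof p.bytes; i++)
            printf("%02x ", p.bytes[i]);
        putchar('\n');
        return 0;
    }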
The AI raises a good point, however: we shouldn't be relying upon machine endianness and then codifying translation rules per machine when we can use shift operations to construct and deconstruct the values instead, thus choosing our own encoding explicitly. You need to know the minimum widths of short, int, long and long long, and the maximum values of their unsigned equivalents. I find it's best to stick to the minimums from the standard: the standard only guarantees that int is at least 16 bits (and long at least 32), so I never assume more than that in my code, and so my code will work on any system with a C compiler, albeit a little inefficiently until it's tuned (just like all...). Anyway, see if you can spot the main pro before I mention it; here's the equivalent (non-portable) macro: #define nth_byte(value, n) (((value) >> (CHAR_BIT * (n))) & UCHAR_MAX)
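A minimal sketch of the shift macro in use, with the usual defensive parentheses added (the 32-bit value is my assumption for the demo):

    #include <limits.h>
    #include <stdio.h>

    /* Extracts byte n by value, not by memory layout, so byte 0 is always the
       least significant byte no matter how the machine stores the value. */
    #define nth_byte(value, n) (((value) >> (CHAR_BIT * (n))) & UCHAR_MAX)

    int main(void)
    {
        unsigned int x = 0x11223344u;   /* assumes a 32-bit unsigned int */

        for (unsigned i = 0; i < 4; i++)
            printf("%02x ", nth_byte(x, i));   /* 44 33 22 11 on every machine */
        putchar('\n');

        printf("%02x\n", (unsigned) nth_byte(42, 0));   /* constants work too: 2a */
        return 0;
    }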
I don't know what the AI is saying about C# being memory safe when it's casting pointer types wildly. (Irrelevant side note: C# is actually memory-unsafe when you use the unsafe keyword, which is what introduces this kind of wild pointer casting.)
On a more related side note, the error modes of right shifting are (still erroneous, but) less serious than those of relying upon internal representations to match between machines: right-shifting a negative value is implementation-defined behaviour, meaning the implementation is required to document what it does in its manual. This is why we should read a book to learn C and then read the manuals for our compilers to explore the other neat things they offer, like UBSan.
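A small sketch of that implementation-defined corner, and the usual fix of doing the shifting in an unsigned type (the values are mine, just to illustrate):

    #include <stdio.h>

    int main(void)
    {
        int negative = -16;

        /* Implementation-defined: on most machines this is an arithmetic shift
           and prints -4, but the standard lets an implementation give you a
           different result; its manual has to tell you which one you get. */
        printf("%d\n", negative >> 2);

        /* Well-defined everywhere: convert to unsigned first, then shift. */
        printf("%u\n", (unsigned int) negative >> 2);
        return 0;
    }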
Shifting left out of bounds is more like what you get if you try to access the 8th byte of a 4-byte int: undefined behaviour... well, it plays nice if you use unsigned int, where bits shifted off the top are simply discarded. Negative values (and oversized shift counts) are what cause the problem, and the problem is a lack of documentation. When code needs porting, these are the issues we need documentation for.
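And a sketch of the left-shift side of it (again, the values are mine, just to illustrate):

    #include <stdio.h>

    int main(void)
    {
        unsigned int u = 0x80000001u;

        /* Fine for unsigned: bits that fall off the top are simply discarded.
           Prints 2 with a 32-bit unsigned int. */
        printf("%u\n", u << 1);

        /* Undefined regardless of signedness: a shift count >= the width of
           the type, i.e. the "8th byte of a 4-byte int" situation.  No manual
           owes you an answer here, so it's commented out. */
        /* printf("%u\n", u << 32); */
        return 0;
    }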
One of the early standard rationale documents makes the reasoning behind this kind of behaviour clear: C was intended to be a highly efficient language, and so behaviour was left undefined where a class of error was too computationally expensive to detect or correct, or where there was an opportunity to hand-optimise your code and you were doing something outside the spirit of those keywords (const, volatile, register, restrict and, for chux, inline). The result of UB isn't required to be documented, let alone stay consistent. To give you some idea of the threshold: it was deemed too costly to map the side effects in the arguments of a function call onto one agreed evaluation order, and because catching that error was so taxing, printf("%d %d", x++, x++) is undefined behaviour. So in reality different architectures print different results and none of it is required to be documented... some day an implementation might even crash on it (deliberately crashing on UB is more or less what UBSan is for). Nowadays our compilers do PGO, so... what about those classes of errors that WERE once computationally difficult? Type punning is still possibly implementation-defined behaviour, because AFAIK the values of those implementation-defined padding bits aren't required to be documented in the compiler's manual.
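The classic example from that rationale, alongside a well-defined rewrite (a sketch, with my own variable names):

    #include <stdio.h>

    int main(void)
    {
        int x = 0;

        /* Undefined behaviour: x is modified twice with nothing sequencing the
           two modifications, and the argument evaluation order is unspecified
           on top of that.  Different compilers genuinely print different things
           and owe you no documentation, so it's commented out.  (GCC's
           -Wsequence-point warning targets exactly this class of code.) */
        /* printf("%d %d\n", x++, x++); */

        /* The boring, well-defined version: sequence the side effects yourself. */
        int first = x++;
        int second = x++;
        printf("%d %d\n", first, second);
        return 0;
    }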
Anyway, the pros of using the shift operator here are that it's much cleaner to express and that it supports passing in constants as well as lvalues, so you can do nth_byte(42, 0) for example. Also, coming back to it: the macro using the right shift contains the implementation-defined CHAR_BIT and UCHAR_MAX, and if you shift past the width of the type I'm pretty sure that's listed as UB, so don't expect your manual to contain instructions telling you what happens. For data that crosses machines, just shift 8 bits at a time instead of CHAR_BIT and assume a maximum of 255 instead of UCHAR_MAX when reading from or writing to disk or the wire. Trust me: you want a portable prototype first, then optimise it per platform.
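A minimal sketch of that advice (put_u32le and get_u32le are names I've made up): a byte order we choose ourselves, fixed 8-bit groups, and 0xff masks, with no CHAR_BIT or UCHAR_MAX anywhere:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Write a 32-bit value in a self-chosen wire format: little-endian,
       8 bits per transfer unit, built with shifts and masks rather than by
       peeking at how this particular machine stores its ints. */
    static void put_u32le(uint8_t out[4], uint32_t v)
    {
        out[0] = (uint8_t)(v         & 0xff);
        out[1] = (uint8_t)((v >> 8)  & 0xff);
        out[2] = (uint8_t)((v >> 16) & 0xff);
        out[3] = (uint8_t)((v >> 24) & 0xff);
    }

    /* Rebuild the value from the bytes, again purely with shifts. */
    static uint32_t get_u32le(const uint8_t in[4])
    {
        return (uint32_t)in[0]
             | ((uint32_t)in[1] << 8)
             | ((uint32_t)in[2] << 16)
             | ((uint32_t)in[3] << 24);
    }

    int main(void)
    {
        uint8_t buf[4];
        put_u32le(buf, 0xdeadbeefu);
        printf("%08" PRIx32 "\n", get_u32le(buf));   /* deadbeef on every machine */
        return 0;
    }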
typedef char BYTE; is bad practice; instead you should be using uint8_t from stdint.h. As for int: do you actually want int, or do you want a generic function and int is just a placeholder?
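For instance (wire_buffer is just an illustrative name):

    #include <stdint.h>

    typedef char BYTE;        /* vague: signedness and width depend on the platform */
    uint8_t wire_buffer[64];  /* exact: 8 bits, unsigned, everywhere it compiles */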