Of course, that (a bitset using a bitwise AND to check individual flag values) would be the most compact and elegant way of doing it, but I've had a difficult time trying to explain it to other "developers". (And by that, I mean "professionals" working at the low-hanging-fruit level, like VB/VBS/ASP -- where OR is always bitwise -- Java, and PHP. It's not that the languages make people dumb, but that the bar to entry is set low, so that beginning developers are rarely exposed to binary unless they want to be.) There seems to be some reluctance to believe that multiple flag values based on 2^n won't cause ambiguities when stored in an int/long.
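To make the point concrete, here's a minimal Java sketch (the tab names and class name are made up for illustration): because each flag owns exactly one bit, no combination of flags can ever collide with another combination.

```java
public class TabFlags {
    // Each flag gets its own power of two, i.e. its own bit.
    public static final int TAB_GENERAL  = 1;  // 2^0
    public static final int TAB_DETAILS  = 2;  // 2^1
    public static final int TAB_HISTORY  = 4;  // 2^2
    public static final int TAB_SECURITY = 8;  // 2^3

    public static void main(String[] args) {
        int visible = TAB_GENERAL | TAB_HISTORY;           // 1 | 4 == 5; no other combination produces 5
        System.out.println((visible & TAB_HISTORY) != 0);  // true  -- that tab is on
        System.out.println((visible & TAB_DETAILS) != 0);  // false -- that tab is off
    }
}
```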
If you understand that 1335 can only happen if the bits representing 1024, 256, 32, 16, 4, 2 and 1 (or 2^10, 2^8, 2^5, 2^4, 2^2, 2^1 and 2^0) are "on" and all others are "off" (since 1335 in binary, as a 4-byte integer, is 00000000 00000000 00000101 00110111), then you can send the equivalent of "show the zeroth, first, second, fourth, fifth, eighth and tenth tabs" from system to system using just the value 1335. Remember to work from right to left -- that way your array indices will correspond exactly to the exponents.
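A quick way to convince a skeptic is to add up the listed powers of two and print the binary form; a small Java sketch:

```java
public class BitBreakdown {
    public static void main(String[] args) {
        int flags = 1024 + 256 + 32 + 16 + 4 + 2 + 1;        // the seven powers of two above
        System.out.println(flags);                           // 1335
        System.out.println(Integer.toBinaryString(flags));   // 10100110111 -- bits 10, 8, 5, 4, 2, 1 and 0 set
    }
}
```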
You can build the value to send using either simple addition or a bitwise OR. 0 | 256 results in 256, just as 0 + 256 does. And on the decoding side, 1335 & 256 will result in 256, since the only "true" bit common to both numbers is the one representing 256. The actual implementation I'll leave as an exercise for the reader -- depending on your code, a zero-fill bitwise right shift with a check of the least significant bit in a loop may work best, or the flag values may be distinct enough in meaning that defining constants and doing a direct AND comparison is the better approach (both styles are sketched below).
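Here's a rough Java sketch of both styles; TAB_EIGHT is a made-up name standing in for whatever your eighth flag actually means:

```java
public class DecodeFlags {
    public static final int TAB_EIGHT = 256;                  // 2^8, a hypothetical named flag

    public static void main(String[] args) {
        // Building the value: bitwise OR (plain addition works too, since the bits never overlap).
        int flags = 1024 | 256 | 32 | 16 | 4 | 2 | 1;          // 1335

        // Style 1: zero-fill right shift, testing the least significant bit each pass.
        for (int i = 0, v = flags; v != 0; v >>>= 1, i++) {
            if ((v & 1) == 1) {
                System.out.println("show tab " + i);           // prints tabs 0, 1, 2, 4, 5, 8 and 10
            }
        }

        // Style 2: a direct AND against a named constant.
        if ((flags & TAB_EIGHT) == TAB_EIGHT) {                // 1335 & 256 == 256
            System.out.println("tab eight is on");
        }
    }
}
```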
I'd stick to a max of 31 bits, though, since maintaining signed/unsigned across languages/databases isn't always reliable (unsigned values aren't always available in the target environment, but 32-bit signed integers are something you can pretty much count on), making the use of the high-order bit a gamble. Misson mentioned that in his post (it's also the primary reason for the guard check and early return on the log2 function he posted in another thread). If you're dealing with a signed data type, a 1 in the most significant bit indicates a negative number, and the bit pattern of a negative number is (usually) the two's complement of the positive number. That's another way of saying the values get weird and unintuitive (the bitwise ops still behave, but the arithmetic value you see is no longer the simple sum of your flags), so it's best to avoid the issue altogether by sticking to the lower 31 bits.
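For instance, in Java (where int is a signed 32-bit type), the difference between bit 30 and bit 31 is easy to demonstrate; a small sketch:

```java
public class HighBit {
    public static void main(String[] args) {
        int bit30 = 1 << 30;
        int bit31 = 1 << 31;                           // lands on the sign bit of a 32-bit signed int

        System.out.println(bit30);                     //  1073741824 -- an ordinary positive flag
        System.out.println(bit31);                     // -2147483648 -- negative, so it may not survive a
                                                       // round trip through another language or database

        System.out.println((bit31 & (1 << 31)) != 0);  // true -- the bitwise test still works
        // ...but the arithmetic reading of the value is no longer the sum of your flags,
        // which is why bits 0..30 are the safer range.
    }
}
```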