Short answer:
In C/C++, the SAME bits are used in both unsigned and signed chars, but they are INTERPRETED differently by the compiler.
On the usual platforms where a char is 8 bits, an unsigned char holds a number from 0 to 255,
and a signed char a number from -128 to 127.
(Note: you will find conflicting answers online for signed char; the C standard itself only guarantees -127 to 127, but on every modern two's-complement machine the range really is -128 to 127, as the example below illustrates.)
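If you want to check the exact ranges on your own machine, a minimal sketch like this one prints them using the standard <limits.h> macros:

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* CHAR_BIT is the number of bits in a char (8 on virtually every modern platform). */
        printf("bits in a char : %d\n", CHAR_BIT);
        printf("unsigned char  : 0 to %u\n", (unsigned)UCHAR_MAX);
        printf("signed char    : %d to %d\n", SCHAR_MIN, SCHAR_MAX);
        return 0;
    }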
Here is an example using a four-bit char, to keep the table small.
(A real char has at least 8 bits, but pretend for a moment that it has 4.)
bits   unsigned char   signed char
0000        0               0
0001        1               1
0010        2               2
0011        3               3
0100        4               4
0101        5               5
0110        6               6
0111        7               7
1000        8              -8    <- same bits, interpreted differently
1001        9              -7
1010       10              -6
1011       11              -5
1100       12              -4
1101       13              -3
1110       14              -2
1111       15              -1    <- the largest unsigned value is -1 as signed
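You can see the same thing with a real 8-bit char. The sketch below stores the all-ones bit pattern 0xFF in one byte and reads it back both ways (assuming the usual 8-bit, two's-complement char, which every modern platform has):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        unsigned char u = 0xFF;   /* bit pattern 11111111 */
        signed char   s;

        memcpy(&s, &u, 1);        /* copy the raw byte; the bits do not change */

        printf("as unsigned char: %u\n", (unsigned)u);   /* prints 255 */
        printf("as signed char  : %d\n", (int)s);        /* prints -1 on two's-complement machines */
        return 0;
    }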
How does this work? Because 1 + -1 = 0, right?
In bits, 1111 + 0001 = 10000, but there are only 4 bits, so the highest bit is thrown away and we are left with 0:
1111 + 0001 = (1)0000   we ignore the carried-out 1
Similarly,
1110 + 0010 = (1)0000,  i.e. -2 + 2 == 0   (both numbers signed)
1101 + 0011 = (1)0000,  i.e. -3 + 3 == 0
1100 + 0100 = (1)0000,  i.e. -4 + 4 == 0
and so on.
This is the two's-complement representation: the bit pattern for -n is exactly the one that, added to n, wraps around to 0.
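Here is a small sketch that checks this wrap-around-to-zero property for a real 8-bit char (again assuming two's complement, so the bits of -n read as 256 - n when viewed as unsigned):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        for (int n = 1; n <= 4; n++) {
            signed char   s = (signed char)-n;   /* -1, -2, -3, -4 */
            unsigned char u;

            memcpy(&u, &s, 1);                   /* look at the raw bits of the signed value */

            /* Adding n back to those bits wraps around to 0 modulo 256. */
            printf("bits of %2d read as unsigned: %3u, and (%3u + %d) %% 256 == %u\n",
                   -n, (unsigned)u, (unsigned)u, n, ((unsigned)u + n) % 256);
        }
        return 0;
    }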
What if both numbers are unsigned?
Then the same thing happens, except the result now wraps around to 0 (unsigned arithmetic in C is defined to wrap modulo 2^n):
1111 + 0001 == (1)0000,  i.e. 15 + 1 == 0   (remember, 15 is the largest value a 4-bit unsigned char can hold)
1110 + 0010 == (1)0000,  i.e. 14 + 2 == 0
1101 + 0011 == (1)0000,  i.e. 13 + 3 == 0
1100 + 0100 == (1)0000,  i.e. 12 + 4 == 0
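On a real 8-bit unsigned char the same wrap-around looks like this (a minimal sketch; note that C promotes the chars to int for the addition itself, so the wrap happens when the result is stored back into 8 bits):

    #include <stdio.h>

    int main(void)
    {
        unsigned char a = 255;   /* 11111111, the largest 8-bit value */
        unsigned char b = 1;     /* 00000001 */

        unsigned char sum = (unsigned char)(a + b);   /* 256 wraps around to 0 */

        printf("%u + %u == %u (mod 256)\n", (unsigned)a, (unsigned)b, (unsigned)sum);
        return 0;
    }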
Wait, what?! How can 1111 == 15 (for an unsigned char), AND 1111 == -1 (for a signed char)?
It's all a matter of interpretation.
The SAME bits are present in both cases, but if you tell the compiler
those bits represent an unsigned char, it will interpret them one way,
and if you tell it they represent a signed char, it will interpret them another way.
Hope that helps.