Convert numbers between binary, octal, decimal, and hexadecimal. Edit any field and all others update instantly. Supports negative numbers.
To convert a decimal number to binary manually: divide the number by 2 repeatedly, recording the remainder (0 or 1) each time, until the quotient is 0. Read the remainders from bottom to top. Example: 42 ÷ 2 = 21 r0; 21 ÷ 2 = 10 r1; 10 ÷ 2 = 5 r0; 5 ÷ 2 = 2 r1; 2 ÷ 2 = 1 r0; 1 ÷ 2 = 0 r1. Reading bottom to top: 101010. So 42 in decimal = 101010 in binary. To verify: 1×32 + 0×16 + 1×8 + 0×4 + 1×2 + 0×1 = 32+8+2 = 42.
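The repeated-division procedure above can be sketched in Python (the function name to_binary is just for illustration; Python's built-in bin() does the same job):

```python
def to_binary(n: int) -> str:
    """Convert a non-negative integer to binary by repeated division by 2."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        bits.append(str(n % 2))  # record the remainder (0 or 1)
        n //= 2                  # integer-divide and repeat until quotient is 0
    return "".join(reversed(bits))  # read the remainders bottom to top

print(to_binary(42))  # → 101010, matching the worked example
```

The built-in bin(42) returns '0b101010': the same digits with the 0b prefix attached.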
Hexadecimal (base-16) uses 16 digits: 0-9 and A-F (where A=10, B=11, C=12, D=13, E=14, F=15). Programmers prefer hex because each hex digit represents exactly 4 binary bits (a nibble), making hex a compact way to represent binary data. Two hex digits represent one byte (8 bits). Examples of hex in computing: memory addresses (0xFF1A2B), HTML color codes (#1A2B3C), MAC addresses, UUID values, SHA hashes, and raw byte representations. The 0x prefix marks hexadecimal in most programming languages, while # marks hex color codes in HTML and CSS. FF in hex = 255 in decimal = 11111111 in binary.
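The digit-to-nibble correspondence can be checked directly in Python; here the color value 0x1A2B3C from the example above is split into its three bytes:

```python
# One hex digit is one nibble (4 bits); two hex digits are one byte (8 bits).
print(format(0xF, "04b"))   # F  → 1111 (one nibble)
print(format(0xFF, "08b"))  # FF → 11111111 (one byte, decimal 255)

# Splitting a hex color into its byte channels with shifts and masks:
color = 0x1A2B3C
r = (color >> 16) & 0xFF  # 0x1A = 26
g = (color >> 8) & 0xFF   # 0x2B = 43
b = color & 0xFF          # 0x3C = 60
print(r, g, b)  # → 26 43 60
```

Because a byte is exactly two hex digits, the channels can be read straight off the hex string without any arithmetic, which is the point of the notation.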
Language-level prefixes for number literals: the 0b (or 0B) prefix indicates binary in many languages: 0b1010 = 10 in decimal. Supported in Python, JavaScript (ES6+), Rust, and Swift. The 0o (or 0O) prefix indicates octal: 0o12 = 10 in decimal. Supported in Python, JavaScript (ES6+), Rust, and Swift. In C and other C-derived languages (and in some shells), a bare leading 0 means octal (e.g., 012 in C is 10 in decimal, a common source of bugs). The 0x (or 0X) prefix indicates hexadecimal: 0xFF = 255 in decimal. Supported in almost every language. In CSS, hex colors use # instead: #FF0000 for red. Knowing these prefixes is essential for reading binary data, bitwise operations, memory addresses, and device registers.
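In Python, all three prefixes are valid in literals, and int() with an explicit base parses the same notations from strings:

```python
# The same value written four ways: binary, octal, decimal, hexadecimal.
print(0b1010, 0o12, 10, 0xA)  # → 10 10 10 10

# Parsing strings with an explicit base:
print(int("0xFF", 16))   # → 255 (the prefix is allowed when the base matches)
print(int("1010", 2))    # → 10
print(int("0b1010", 0))  # → 10 (base 0 tells int() to infer the base from the prefix)
```

The base-0 form mirrors how a compiler reads literals: the prefix alone decides the base.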
An n-bit unsigned integer can represent 2^n values, from 0 to 2^n − 1. 8-bit: 0 to 255 (2^8 = 256 values), used for byte values, ASCII characters, and RGB color channels. 16-bit: 0 to 65,535 (2^16 = 65,536 values), used in older systems, audio sample values, and Unicode code points in the Basic Multilingual Plane. 32-bit: 0 to 4,294,967,295 (about 4.3 billion), the standard integer size in most programming languages, and the size of IPv4 addresses and classic Unix timestamps (whose signed form overflows in 2038). 64-bit: 0 to about 18.4 quintillion, used on modern 64-bit systems; JavaScript's number type is an IEEE 754 double with 53 bits of integer precision, so it represents integers exactly only up to 2^53 − 1. Signed integers reserve one bit for the sign, roughly halving the positive range: in two's complement, a signed 8-bit integer holds −128 to 127 and a signed 32-bit integer holds −2,147,483,648 to 2,147,483,647.
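These ranges all follow from the 2^n formula, so they can be checked in a few lines of Python (variable names are illustrative):

```python
# Derive unsigned and two's-complement signed ranges for common bit widths.
for n in (8, 16, 32, 64):
    unsigned_max = 2**n - 1        # largest unsigned n-bit value
    signed_min = -(2 ** (n - 1))   # two's-complement minimum
    signed_max = 2 ** (n - 1) - 1  # two's-complement maximum
    print(f"{n}-bit: unsigned 0..{unsigned_max:,}, "
          f"signed {signed_min:,}..{signed_max:,}")
```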
Computers store numbers in binary (base-2), but humans work in decimal (base-10). Hexadecimal (base-16) bridges both worlds — each hex digit represents exactly 4 binary bits, making it compact and easy to convert. Octal (base-8) was common in older systems for similar compactness reasons.
Decimal 255 = Hex FF = Binary 11111111 = Octal 377. Decimal 16 = Hex 10 = Binary 10000 = Octal 20. Decimal 42 = Hex 2A = Binary 101010 = Octal 52. In code, use prefixes: 0b for binary, 0o for octal, 0x for hexadecimal.
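The rows above can be reproduced with Python's format() and int(), which convert in both directions:

```python
# Reproduce the conversion table for the three example values.
for n in (255, 16, 42):
    print(f"Decimal {n} = Hex {format(n, 'X')} = "
          f"Binary {format(n, 'b')} = Octal {format(n, 'o')}")

# And back again: parsing each notation yields the same decimal value.
assert int("FF", 16) == int("11111111", 2) == int("377", 8) == 255
assert int("2A", 16) == int("101010", 2) == int("52", 8) == 42
```

format(n, 'X') uses uppercase hex digits; 'x' would give lowercase.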
The bit-length indicators show which integer sizes can hold the current value. An 8-bit integer holds 0–255. 16-bit: 0–65,535. 32-bit: 0–4,294,967,295. 64-bit: up to ~18.4 quintillion. Signed integers use one bit for the sign, halving the positive range.