What Is Hash Generation?
A hash function takes an input of any size — a single character, a 10 GB file, an empty string — and produces a fixed-length output called a digest. MD5 outputs 128 bits (32 hex characters), SHA-1 outputs 160 bits (40 hex characters), SHA-256 outputs 256 bits (64 hex characters), and SHA-512 outputs 512 bits (128 hex characters). The same input always produces the same output, every time, on every machine. Change a single bit in the input and the output changes completely — this is called the avalanche effect. Good hash functions make it practically impossible to predict how the output will shift from a small input change.
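Both properties are easy to see in a few lines of code. The sketch below uses Python's standard `hashlib` module (an assumption; the article doesn't name a language): it checks each algorithm's fixed digest length, then flips a single input bit and counts how many digest bits change.

```python
import hashlib

message = b"hello world"

# Fixed-length output: the digest size depends only on the algorithm,
# never on the size of the input.
for name, bits in [("md5", 128), ("sha1", 160), ("sha256", 256), ("sha512", 512)]:
    digest = hashlib.new(name, message).hexdigest()
    print(f"{name:<6} {bits:>3} bits: {digest}")
    assert len(digest) == bits // 4  # one hex character encodes 4 bits

# Avalanche effect: 'h' (0x68) and 'i' (0x69) differ in a single bit,
# yet roughly half of the 256 digest bits change.
a = int(hashlib.sha256(b"hello world").hexdigest(), 16)
b = int(hashlib.sha256(b"iello world").hexdigest(), 16)
print(bin(a ^ b).count("1"), "of 256 bits changed")
```

Hashing the same message twice anywhere, in any language, yields the identical digest, which is what makes digests usable as portable fingerprints.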
Hash functions are one-way by design. You can go from input to digest in milliseconds, but going from digest back to input is computationally infeasible for a well-designed algorithm. This one-way property is what makes hashing useful for password storage, data integrity checks, digital signatures, and content addressing. When you store a password hash in your database, an attacker who steals the database gets digests — not plaintext passwords. They would need to try every possible input and compare digests to recover the originals, which brings us to the topic of brute-force resistance and why algorithm choice matters.
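A minimal sketch of that storage pattern, again assuming Python's `hashlib`: the database keeps only the digest, and verification re-hashes the candidate password rather than ever reversing the stored value. (Illustration only; production password storage should use a salted, deliberately slow KDF such as scrypt or Argon2, not a bare fast hash.)

```python
import hashlib
import hmac

def digest_of(password: str) -> str:
    # Illustration only: real systems should use a salted, slow KDF
    # (e.g. scrypt or Argon2) rather than a single fast SHA-256 pass.
    return hashlib.sha256(password.encode("utf-8")).hexdigest()

# What the database stores: the digest, never the plaintext.
stored = digest_of("correct horse battery staple")

def verify(candidate: str, stored_digest: str) -> bool:
    # There is no function that recovers the password from `stored`;
    # verification works forward, by re-hashing and comparing.
    return hmac.compare_digest(digest_of(candidate), stored_digest)

print(verify("correct horse battery staple", stored))  # True
print(verify("wrong guess", stored))                   # False
```

`hmac.compare_digest` performs a constant-time comparison, avoiding a timing side channel that an ordinary `==` could leak.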
Not all hash algorithms offer the same security guarantees. MD5 was published in 1992 by Ronald Rivest and was widely used for over a decade. By 2004, researchers had demonstrated practical collision attacks — two different inputs producing the same MD5 digest. SHA-1, published by NIST in 1995, held up longer but was theoretically broken by 2005 and practically broken by Google's SHAttered attack in 2017, which produced two different PDF files with the same SHA-1 hash. SHA-256 and SHA-512, both part of the SHA-2 family designed by the NSA, have no known practical attacks as of this writing. For anything security-critical — password hashing, certificate verification, code signing — SHA-256 is the minimum standard you should accept.
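Putting the recommendation into practice is cheap: swapping MD5 for SHA-256 in an integrity check is usually a one-line change. The sketch below (a hypothetical helper, assuming Python's `hashlib`) computes a file's SHA-256 checksum in fixed-size chunks, the way you would verify a downloaded artifact against its published checksum.

```python
import hashlib
import os
import tempfile

def sha256_file(path: str, chunk_size: int = 65536) -> str:
    # Hash the file in fixed-size chunks so memory use stays constant
    # even for multi-gigabyte inputs.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Demo on a throwaway file; in practice you would compare the result
# against the checksum published alongside a download.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"example payload" * 1000)

checksum = sha256_file(tmp.name)
print(checksum)
os.unlink(tmp.name)
```

Because `hashlib.sha256()` and `hashlib.md5()` share the same update/hexdigest interface, migrating legacy MD5 checks to SHA-256 mostly means changing the constructor and re-generating the stored checksums.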