OK, so Core Hadoop is HDFS with Map/Reduce.
HDFS is the Hadoop Distributed File System. Much like Gluster and Ceph, it uses local disk and replication across nodes to provide a resilient cluster FS.
Map/Reduce is a model for writing parallel applications: the map step runs over the data in parallel across the nodes (filtering or transforming it based on your logic), and the reduce step aggregates those per-node results into the answer returned to the user. Hadoop provides the framework and API for writing Map/Reduce jobs.
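To make the map/reduce split concrete, here's a minimal sketch of the classic word-count job using Hadoop's Java MapReduce API. The class name `WordCount` and the assumption that input/output HDFS paths arrive as command-line arguments are just for illustration:

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map: runs in parallel on the nodes holding the data;
  // emits (word, 1) for every word in its input split.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reduce: aggregates the per-node results; sums the counts for each word.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // pre-aggregate on each node
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

You'd package that into a jar and kick it off with something like `hadoop jar wordcount.jar WordCount <input> <output>`; Hadoop handles splitting the input, scheduling the map tasks, shuffling, and running the reducers for you.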
Since Hadoop ships the code out to the nodes that already hold the data (instead of pulling the data across the network), it processes everything locally and is really fast for table-scan type workloads.
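You can actually see the locality the scheduler works from: HDFS splits every file into blocks, and the NameNode reports which DataNodes hold each block, so map tasks can be placed next to the data. A small sketch using the HDFS Java client, assuming a configured cluster on the classpath and a file path passed as args[0]:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockLocations {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    FileStatus status = fs.getFileStatus(new Path(args[0]));

    // Each block lives on several DataNodes (3 replicas by default);
    // the job scheduler tries to run each map task on one of those hosts.
    for (BlockLocation loc : fs.getFileBlockLocations(status, 0, status.getLen())) {
      System.out.println("bytes " + loc.getOffset() + "-"
          + (loc.getOffset() + loc.getLength())
          + " on " + String.join(", ", loc.getHosts()));
    }
  }
}
```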
We're building our cluster to run analytics on our sales and order data, medical trial results, and other datasets.
I'd grab a copy of Hadoop Operations or Hadoop in Action.
Here's a good visual. A traditional database is like you searching through a shuffled deck of cards for the Ace of Spades. Hadoop is like handing one card each to 52 ten-year-olds and asking who has the Ace: you get the answer almost instantly. Each kid is smaller and less capable than you, but together they find the card faster. So instead of building monster-sized servers for databases, you buy lots of smaller, cheaper systems. Even with the data duplicated (HDFS keeps three replicas by default), our cost per GB of storage has dropped by 90%.
Hadoop is kewl.