An open source platform for the distributed processing of structured, semi-structured and unstructured data, and the foundation of the IBM and Hortonworks enterprise-grade distribution.
What is Apache Hadoop®?
Apache Hadoop offers highly reliable, scalable, distributed processing of large data sets using simple programming models. Because it can be deployed on clusters of commodity hardware, Hadoop provides a cost-effective way to store and process structured, semi-structured and unstructured data with no format requirements.
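To make the "simple programming models" point concrete, here is the classic MapReduce word-count job written against the standard Apache Hadoop MapReduce API. It is a minimal illustrative sketch, not part of any IBM or Hortonworks product; the class name and the input and output paths are placeholders.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: emit (word, 1) for every word in the input split.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reduce phase: sum the counts for each word across the cluster.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // local pre-aggregation before the shuffle
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // e.g. an HDFS input directory
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // must not already exist
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

The developer writes only the map and reduce functions; Hadoop handles splitting the input across the cluster, scheduling tasks near the data, shuffling intermediate results, and recovering from node failures.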
Key Big Data Use Cases for Hadoop
- New data formats: Utilize new forms of semi-structured and unstructured data, such as streaming audio and video, social media, sentiment and clickstream data, that can't be ingested into the Enterprise Data Warehouse (EDW). This data can support more accurate analytic decisions in response to today's technologies such as the Internet of Things (IoT), artificial intelligence (AI), cloud and mobile.
- Data lake analytics: Provide a platform for real-time, self-service access and advanced analytics for data users such as data scientists, line-of-business (LOB) owners and developers. The Hadoop-based data lake is the future of data science, an interdisciplinary field that combines machine learning, statistics, advanced analysis and programming.
- Data offload and consolidation: Optimize your Enterprise Data Warehouse (EDW) and streamline costs by moving "cold" data, or data not currently in use, to a Hadoop-based data lake. Consolidating siloed data in the data lake decreases costs, increases accessibility and drives better, more accurate decisions.
Learn more about Big Data
In the spotlight
Get started with Apache Hadoop®
IBM, in partnership with Hortonworks, offers Hortonworks Data Platform (HDP), a secure, enterprise-ready open source Hadoop distribution based on a centralized architecture. When used with IBM Db2 Big SQL, HDP addresses a range of data-at-rest and data-in-motion use cases, provides data federation across the organization, powers real-time customer applications, and delivers robust analytics that accelerate decision making.
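Because Db2 Big SQL exposes Hadoop data through standard SQL interfaces, applications can query the cluster with ordinary JDBC. The sketch below is a hedged example only: the host, port, database name, credentials and the "sales" table are placeholder assumptions, and the actual connection details depend on your Big SQL installation.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class BigSqlQueryExample {
  public static void main(String[] args) throws Exception {
    // Uses the standard IBM Db2 JDBC driver (db2jcc4.jar on the classpath).
    // Endpoint, database name and credentials below are placeholders.
    String url = "jdbc:db2://bigsql-head.example.com:32051/BIGSQL";
    try (Connection conn = DriverManager.getConnection(url, "bigsql", "password");
         Statement stmt = conn.createStatement();
         // "sales" is a hypothetical table defined over data stored in Hadoop.
         ResultSet rs = stmt.executeQuery(
             "SELECT region, SUM(amount) AS total FROM sales GROUP BY region")) {
      while (rs.next()) {
        System.out.println(rs.getString("region") + "\t" + rs.getString("total"));
      }
    }
  }
}
```

The point of the sketch is that existing SQL-based tools and applications can reach data in the HDP cluster without being rewritten for Hadoop-specific APIs.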
Accelerate big data collection and dataflow management
Hortonworks DataFlow (HDF) for IBM, powered by Apache NiFi, is the first integrated platform that solves the challenges of collecting and transporting data from a multitude of sources. HDF for IBM enables simple, fast data acquisition, secure data transport, prioritized data flow and clear traceability of data from the edge of your network to the core data center. It uses a combination of an intuitive visual interface, a high-fidelity access and authorization mechanism and an always-on chain of custody (data provenance) framework.
Accelerated and Stable Apache Hadoop®
The best way to move forward with Hadoop is to choose an installation package that simplifies interoperability. The Open Data Platform Initiative (ODPi) is a multi-vendor standards association focused on advancing the adoption of Hadoop in the enterprise by promoting the interoperability of big data tools. ODPi simplifies and standardizes the Apache Hadoop big data ecosystem with a common reference specification called the ODPi Core.
Learn more about ODPi