Duration
3 days
Goal
- Become familiar with the architecture of Apache Hadoop
- Understand the core components of Hadoop and the Hadoop ecosystem
- Understand how HDFS and MapReduce function
- Acquire basic skills for developing MapReduce and simple big data applications
- Learn to use tools for defining data workflows
- Understand the limitations of Hadoop and MapReduce
Target group
System architects, developers, and anyone interested in big data.
Contents
- Reason and Purpose of Using Hadoop
- Attributes of the Hadoop Architecture
- Core Concepts of Hadoop
- Functionality of MapReduce
- Main Components of the Hadoop Ecosystem: HDFS, MapReduce, Pig, Hive, HBase, ZooKeeper, Flume, Cascading, etc.
- Development of MapReduce Applications (see the sketch after this list)
- Data Import and Export
- Automation of Data Workflows
- Data Serialization/Deserialization with Avro
- Hadoop APIs
- Best Practices and Patterns
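
To give a taste of the "Development of MapReduce Applications" topic, the sketch below shows the classic word-count job written against Hadoop's org.apache.hadoop.mapreduce API. It is a minimal illustrative example, not course material; the input and output paths are placeholder command-line arguments.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Mapper: emits (word, 1) for every token in its input split.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {

    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reducer: sums the counts per word; also usable as a combiner.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {

    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input directory
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // must not exist yet
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

Submitted with `hadoop jar wordcount.jar WordCount <input> <output>` (paths are placeholders), the mapper runs once per HDFS block, the combiner pre-aggregates counts locally, and the reducer writes one (word, count) pair per key.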
Methods
Presentation with practical exercises.
Requirements
Basic knowledge of IT.
Contact our service center
DE: +49 (0)711 903 632 45
CH: +41 (0)58 459 57 95
AT: +43 (0)1 33 235 3160
CH: +41 (0)58 459 54 54
| Place | Date | Language | Price |
|---|---|---|---|
| In-house training courses on request | | | On inquiry |
Course prices are shown in EUR. For orders from Switzerland, we convert the price into CHF and apply the applicable VAT rate. We are also happy to assist you with your order by telephone: CH +41 58 459 57 95 or DE +49 711 903 632 45.