Description
Job Responsibilities:
1. Design and develop the Spark, Hadoop, and Flink computing platforms.
2. Build an efficient, stable computing platform that provides the business with the large-scale computing services required for big data analysis.
3. Build the data warehouse and machine learning platform.
Job Requirements:
• Bachelor's degree in a computer-related major.
• More than 2 years of experience in big data platform development, operations, and maintenance.
• Deep understanding of the computing principles of the Spark and Hadoop platforms, with the ability to read Spark and Hadoop source code.
• Open-source contributors to the Spark or Hadoop communities are preferred.
• Familiar with Spark's stream computing framework or other open-source real-time computing frameworks.
• Experience in data warehouse construction.
• Experience with big data algorithms and machine learning platform construction is preferred.
• Proficient in Java, MapReduce principles, and secondary development for data analysis.
• Enthusiasm for new technology, strong learning ability, and good teamwork and communication skills.
• Able to work well under pressure.
Requirements
Minimum education level: Bachelor's Degree
Years of experience: 3
Language(s): English
Knowledge: Java
Availability for travel: No
Availability for change of residence: No
Other Info
₱ 90,000.00 monthly · Pasay, National Capital Region · Today, 12:46 PM
Work type
Full Time
WESEARCH Searchers and Staffers Corp.