23 Data Engineer jobs in Egypt
Network Big Data Engineer (vois)
Posted today
Job Description
The Big Data Engineer provides expert guidance and delivers through self and others to:
1. Integrate the necessary data from several sources in the Big Data Programme for analysis and for Technology actions;
2. Deliver and implement core capabilities (frameworks, platform, development infrastructure, documentation, guidelines and support) to speed up Local Market and tenant delivery in the Big Data Programme, assuring quality, performance and alignment of component releases to the Group technology blueprint for the platform;
3. Support local markets, tenants and Group functions in obtaining business value from the operational data.
Key accountabilities and decision ownership:
- Design and implement core platform capabilities, tools, processes, ways of working and conventions under agile development to support the integration of Local Market and tenant data sourcing and use-case implementation, emphasising reusability to ease delivery and ensure standardisation across Local Market deliverables in the platform.
- Support the distributed data engineering teams, including technical support and training in the Big Data Programme frameworks and ways of working, revision and integration of source code, support to releasing and source code quality control
- Working with the Group architecture team to define the strategy for evolving the Big Data capability, including solution architectural decisions aligned with the platform architecture
- Defining the technologies to be used on the Big Data Platform and investigating new technologies to identify where they can bring benefits
Core competencies, knowledge and experience:
- Experience building systems for real-time data processing using Spark Streaming, Flink, Storm or Heron, together with Kafka, Beam, Dataflow, Kinesis or similar data streaming frameworks;
- Experience with the common SDLC toolchain, including SCM, build tools, automated unit, integration, functional and performance testing, TDD/BDD, CI and continuous delivery, under agile practices;
- Experience working in large-scale multi tenancy big data environments;
Must have technical / professional qualifications:
- Expert level experience with Hadoop ecosystem (Spark, Hive/Impala, HBase, Yarn); desirable experience with Cloudera distribution; experience with similar cloud provider solutions also considered (AWS, GCP, Azure)
- Strong software development experience in the Scala and Python programming languages; Java and functional languages desirable;
- Experience with Unix-based systems, including bash scripting
- Experience with columnar data formats
- Experience with other distributed technologies such as Cassandra, Splunk, Solr/Elasticsearch, Flink, Heron or Beam is also desirable.
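To make the streaming requirement concrete, here is a minimal, framework-free sketch of the tumbling-window aggregation that engines like Spark Structured Streaming or Flink perform at scale; the function name and event shape are illustrative, not part of any of those APIs:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_size_s):
    """Assign each (timestamp_s, key) event to a fixed-size event-time
    window and count events per (window_start, key) pair -- the core
    idea behind tumbling-window aggregations in streaming engines."""
    counts = defaultdict(int)
    for ts, key in events:
        # Floor the event timestamp to the start of its window.
        window_start = (ts // window_size_s) * window_size_s
        counts[(window_start, key)] += 1
    return dict(counts)

# Four events across two 10-second windows:
events = [(1, "a"), (4, "a"), (12, "b"), (14, "a")]
print(tumbling_window_counts(events, 10))
# → {(0, 'a'): 2, (10, 'b'): 1, (10, 'a'): 1}
```

A real streaming job adds what this sketch omits: unbounded input, watermarks for late data, and fault-tolerant state.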
_VOIS #Movewithus
Data Engineer

Posted 28 days ago
Job Description
Data Engineer
**Job Description:**
**ESSENTIAL DUTIES & RESPONSIBILITIES**
+ Apply engineering knowledge of electronic products to enrich newly introduced parts with technical parameters across different areas on a daily basis.
+ Apply engineering knowledge across the data development stages, including but not limited to part-number decoding, electrical/performance characteristics, compliance, and lifecycle data collection.
+ Apply engineering knowledge of electronic products to track updates in technical document revisions and identify significant technical changes to be updated in the database.
+ Handle customer requests and develop the resulting data for entry into the database.
+ Apply a checklist to keep and maintain our data with high quality.
**SKILLS & EXPERIENCE REQUIREMENTS:**
+ Bachelor's degree in Electrical or Electronics Engineering.
+ 0 to 1 year of relevant experience.
+ Familiarity with electronic components and systems.
+ Strong English language skills (written, read, and spoken).
+ Good knowledge of at least one programming language (Python or Java).
+ Proficient in data analysis using Excel, Power BI, or Python.
+ Ability to write code and scripts to automate data processes using languages such as Python, Java, or Scala.
+ Commitment to continuously optimizing data processes for improved efficiency and scalability.
+ Experience in implementing data quality checks and monitoring systems to proactively detect and resolve issues.
+ Proven experience working on projects utilizing technologies such as Python, Power BI, or related tools.
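As an illustration of the data-quality checks and monitoring mentioned above, here is a minimal sketch in plain Python; the field names (`part_number`, `voltage`) and the range thresholds are hypothetical, not taken from any real parts database:

```python
def run_quality_checks(rows, required_fields, valid_ranges):
    """Run simple data-quality checks over a list of dict records:
    missing required fields, duplicate part numbers, and out-of-range
    numeric values. Returns a list of human-readable issue strings."""
    issues = []
    seen = set()
    for i, row in enumerate(rows):
        for field in required_fields:
            if row.get(field) in (None, ""):
                issues.append(f"row {i}: missing '{field}'")
        pk = row.get("part_number")
        if pk in seen:
            issues.append(f"row {i}: duplicate part_number '{pk}'")
        seen.add(pk)
        for field, (lo, hi) in valid_ranges.items():
            value = row.get(field)
            if value is not None and not (lo <= value <= hi):
                issues.append(f"row {i}: '{field}'={value} outside [{lo}, {hi}]")
    return issues

rows = [
    {"part_number": "A1", "voltage": 5.0},
    {"part_number": "A1", "voltage": 99.0},   # duplicate + out of range
    {"part_number": None, "voltage": 3.3},    # missing key
]
print(run_quality_checks(rows, ["part_number"], {"voltage": (0, 50)}))
```

In practice such checks run as part of a scheduled monitoring job, with the issue list feeding an alerting channel rather than a print.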
**Location:**
EG-Banha, Egypt (EI Corniche Street)
**Time Type:**
Full time
**Job Category:**
Engineering and Technology
Arrow Electronics, Inc.'s policy is to provide equal employment opportunities to all qualified employees and applicants without regard to race, color, religion, age, sex, marital status, gender identity or expression, sexual orientation, national origin, disability, citizenship, veteran status, genetic information, or any other characteristic protected by applicable state, federal or local laws. Our policy of equal employment opportunity and affirmative action applies to all employment decisions, personnel policies and practices, and programs.
Data Engineer
Posted today
Job Description
**We’re looking for a Data Engineer**
**A bit about us**
**Main responsibilities**
- Data Quality Assurance: Identifying and eliminating anomalies by means of data profiling and cleansing.
- Data Integration and Consolidation: Bringing information together from multiple systems into a single repository
- Data Monitoring and Reporting: Proactively reviewing and evaluating data and its quality to ensure that it is fit for purpose
**A day in your life**
**Data Quality Assurance**
- Develop, implement and maintain data quality assurance processes to ensure data accuracy, completeness, consistency, and validity
- Perform data profiling, data cleansing, and data enrichment activities to identify and fix data errors and limitations
- Collaborate with cross-functional teams to define data quality standards and ensure adherence to them
**Data Integration and Consolidation**
- Develop and implement a data integration strategy to connect all data sources to a single source of truth
- Work with internal and external stakeholders to identify and prioritize data sources and design data models to support data consolidation
- Build and maintain ETL (Extract, Transform, Load) pipelines to move data from various sources into the centralized data repository
**Data Monitoring and Reporting**
- Develop and implement data monitoring and reporting mechanisms to track data quality, data availability, and data usage metrics
- Establish and maintain data governance policies and procedures to ensure compliance with regulatory requirements and industry best practices
- Create dashboards and alerts to provide visibility into data quality and usage, and proactively identify and resolve issues.
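The integration and consolidation responsibilities above can be sketched with the standard library alone; the `contacts` table, the source names, and the field names below are illustrative, not taken from any particular stack:

```python
import sqlite3

def load_to_warehouse(conn, source_batches):
    """Minimal ETL: extract rows from several source systems, normalise
    the join key (email), and load everything into a single 'contacts'
    table acting as the single source of truth."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS contacts "
        "(email TEXT PRIMARY KEY, source TEXT, name TEXT)"
    )
    for source, rows in source_batches.items():
        for row in rows:
            # Transform: trim and lower-case emails so the same contact
            # arriving from two systems deduplicates onto one row.
            email = row["email"].strip().lower()
            conn.execute(
                "INSERT OR REPLACE INTO contacts VALUES (?, ?, ?)",
                (email, source, row.get("name", "")),
            )
    conn.commit()
    return conn.execute("SELECT COUNT(*) FROM contacts").fetchone()[0]

conn = sqlite3.connect(":memory:")
batches = {
    "crm": [{"email": "Amr@x.co", "name": "Amr"}],
    "billing": [{"email": "amr@x.co "}, {"email": "b@x.co", "name": "B"}],
}
print(load_to_warehouse(conn, batches))  # → 2 (one contact deduplicated)
```

A production pipeline would swap SQLite for the warehouse of choice and add incremental loads, schema evolution, and the quality checks described above.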
**Requirements**:
**Experience needed**
- 2+ years of proven experience as a Data Engineer
- 1+ years of proven experience in using HubSpot
- Relevant experience with startups is a BIG plus
- You’re adept at queries, report writing, and presenting findings
- You have strong analytical skills with the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy
- You’re a technical expert in data models, database design and development, data mining, and segmentation techniques
- You have extensive knowledge of statistics and experience using statistical packages for analyzing datasets (Excel, SPSS, SAS, etc.)
**Hiring process**
- General assessment form (5-10 minutes)
- Chemistry meeting (15-30 minutes)
- Technical interview (30-60 minutes)
- Technical assessment
- Offer extended to successful applicants
**Benefits**
**Role benefits**
- Work from anywhere
- Unlimited paid sick days
- No clocking in/out
- ESOP shares
- Laptop fund
- Fast promotions (like really fast)
- Enjoy monthly Pizza Fridays
- Learn anything online and expense it on us
- Travel for company events and get reimbursed
**How to gain extra Gooru points**
- Demonstrate your entrepreneurial, hustling, and energetic spirit
- Prove your ability to take full ownership of your role without the need to be micromanaged
- Conduct proper research about AlGooru prior to your interviews with us; we love people who already know about us.
Data Engineer-Data Integration
Posted 14 days ago
Job Description
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.
A career in IBM Consulting is rooted by long-term relationships and close collaboration with clients across the globe.
You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio; including Software and Red Hat.
Curiosity and a constant quest for knowledge serve as the foundation to success in IBM Consulting. In your role, you'll be encouraged to challenge the norm, investigate ideas outside of your role, and come up with creative solutions resulting in ground breaking impact for a wide network of clients. Our culture of evolution and empathy centers on long-term career growth and development opportunities in an environment that embraces your unique skills and experience.
**Your role and responsibilities**
Designs and builds solutions to move data from operational and external environments to business intelligence, data warehouse and data lake environments using Snowflake, Azure/Fabric, DBT and BW. Skills include designing and developing extract, transform, load (ETL) and extract, load, transform (ELT) pipelines. Experience includes full-lifecycle implementation of the technical components of data warehouse, data lake and business intelligence solutions.
Position Keywords
#snowflake #dbt #azure #python #sql #modeling
**Required technical and professional expertise**
* 5-7 years of full-time working experience in the data engineering domain.
* Expertise in Snowflake, Azure/Fabric, DBT, BW, Python, SQL and must have excellent understanding of databases.
* Must have experience with Cloud data integration.
* Good knowledge of data modeling, industry models, analytic models, data architecture, dimensional modeling, third-normal-form modeling, the Kimball and Inmon methodologies, data integration and business intelligence.
* Demonstrated ability in solutioning covering data ingestion, data cleansing, ETL, data mart creation and exposing data for consumers.
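As a toy illustration of the dimensional (Kimball-style) modeling listed above, the snippet below builds a one-dimension star schema in SQLite and aggregates the fact table through a join; the table and column names are invented for the example:

```python
import sqlite3

# A minimal star schema: one fact table keyed to one dimension table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE fact_sales (
        product_key INTEGER REFERENCES dim_product(product_key),
        amount REAL
    );
    INSERT INTO dim_product VALUES (1, 'widget'), (2, 'gadget');
    INSERT INTO fact_sales VALUES (1, 10.0), (1, 5.0), (2, 7.5);
""")

# The classic star-schema query: join the fact to its dimension and
# aggregate measures by a dimension attribute.
rows = conn.execute("""
    SELECT p.name, SUM(f.amount)
    FROM fact_sales f JOIN dim_product p USING (product_key)
    GROUP BY p.name ORDER BY p.name
""").fetchall()
print(rows)  # → [('gadget', 7.5), ('widget', 15.0)]
```

The same shape scales up directly in Snowflake or a DBT model: dimensions carry descriptive attributes, facts carry measures and foreign keys.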
**Preferred technical and professional experience**
* SnowPro Advanced: Data Engineer Certification.
* Microsoft Certified: Fabric Data Engineer Associate Certification.
IBM is committed to creating a diverse environment and is proud to be an equal-opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, gender, gender identity or expression, sexual orientation, national origin, caste, genetics, pregnancy, disability, neurodivergence, age, veteran status, or other characteristics. IBM is also committed to compliance with all fair employment practices regarding citizenship and immigration status.
Senior Data Engineer
Posted today
Job Description
**Responsibilities**:
- Build end-to-end data pipelines to enable the training and running of machine learning models.
- Collaborate with the team to design, implement and maintain Data Products to improve our experimentation stack.
- Interpret trends and patterns; conduct complex data analysis and report on results
- Work together with other engineers to stabilize and scale the technological stack of the experimentation team.
- Work closely with Data Scientists and Software Engineers in the Global Data tribe to productionalize their statistical models and code into data pipelines
- Communicate with different teams regarding data consistency and data availability
- Create proofs of concepts with new technologies and drive innovation
**Nice to have**:
- General Data Science knowledge: basic AI and Machine learning concepts.
- Experience working with Cloud environments: Google Cloud Platforms and/or Amazon Web Services
- Experience with CI/CD tools and frameworks such as GitHub Actions and Travis CI.
- Infrastructure as code: Terraform
- Application containerization: Docker, Kubernetes (K8s)
- Experience with object-oriented languages (Java) and/or Go.
- Experience with the following tools: Spark, BigQuery, Kubernetes.
- A passion for data.
- Experience with Apache Airflow or similar technologies.
- Experience with the processing of large amounts of structured and unstructured data.
- Expert knowledge of SQL (PL/SQL).
- Experience with stakeholders management and writing documentation.
- Good communication skills to present and report information to peers and stakeholders.
- Comfortable working and collaborating with a diverse group of people from different backgrounds.
Senior Data Engineer
Posted today
Job Description
**Job Description**:
- Build scalable data pipelines to ingest data from a variety of data sources, identify critical data elements and define data quality rules.
- Develop and implement databases and data collection systems, and undertake preprocessing of structured and unstructured data.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and ‘big data’ technologies.
- Design and develop capabilities to deliver innovative and improved data solutions.
- Evaluate and improve data quality by implementing test cases, alerts, and data quality safeguards.
Job Requirements
- BSc in Computer Science, Engineering, or a relevant field is preferred
- 4+ years of experience in data processing, analysis, and problem-solving with large amounts of data.
- Good knowledge of relational data warehousing technologies especially in cloud data warehousing (e.g. Amazon Redshift, Google BigQuery, Snowflake, Vertica, etc.)
- Proficiency in at least one scripting language, Python is preferred.
- Experience in big data technologies like Apache Hadoop, Spark, Airflow, MongoDB, etc.
- Familiarity with the basic principles of distributed computing and data modeling.
- Strong experience with SQL and mobile apps.
- Proficient with Git.
**Salary**: From E£222,222.00 per month
**Education**:
- Bachelor's (preferred)
**Experience**:
- mobile apps: 2 years (preferred)
Senior Data Engineer
Posted today
Job Description
**Summary**:
Raisa is an energy fintech company that uses proprietary technologies (in-house tech, machine learning microservices, securitization) to manage large investments in the United States. With over $2 billion of private funding, Raisa has built a diverse portfolio of oil and gas assets. We are passionate about innovation that leverages our team’s capabilities via proprietary technology and creates extraordinary results for all stakeholders.
We are looking for a highly motivated **senior data engineer** to join our team at Raisa. As a senior data engineer, your main goal is to be a few steps ahead of other teams, supporting them with the required data infrastructure, tools and data pipelines.
**Responsibilities**:
- Design, build and maintain Raisa’s data pipelines and data warehouse.
- Extract data from structured, semi-structured and unstructured datasets.
- Build dashboards and reports.
- Design and implement complex data models using SQL, Python and Apache Spark.
- Manage Raisa’s cloud data infrastructure on Azure.
- Design and build useful data tools and frameworks that can serve different teams.
- Communicate with different teams to understand their data requirements and their data infrastructure challenges.
**Must have**:
- 4-8 years of experience in data engineering
- Experienced in writing and optimizing complex SQL queries
- Experienced in building complex data pipelines using SQL and Python
- Experienced in modern data warehousing concepts and working with cloud data warehouses like Redshift, BigQuery or Snowflake (we are on Snowflake)
- Experienced in managing cloud infrastructure on AWS, GCP or Azure (we are on Azure)
- Experienced with MLOps, including deploying and monitoring ML models in production
- Familiar with data pipeline orchestration tools like Airflow, Dagster or Prefect
- Comfortable working with ambiguous requirements
- Able to take initiatives and lead the way
- Focused on the details and having passion for data quality
- Passionate about code efficiency and cloud cost optimization
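The pipeline work described above is typically coordinated by an orchestrator; the sketch below shows only the core idea, dependency-ordered execution, using Python's standard library (Airflow, Dagster and Prefect add scheduling, retries and observability on top). Task names are illustrative:

```python
from graphlib import TopologicalSorter

def run_pipeline(tasks, deps):
    """Run named task callables in dependency order. `deps` maps each
    task name to the set of tasks that must finish before it starts,
    mirroring how orchestrators model a DAG of pipeline steps."""
    order = list(TopologicalSorter(deps).static_order())
    for name in order:
        tasks[name]()  # a real orchestrator would add retries/logging
    return order

log = []
tasks = {n: (lambda n=n: log.append(n)) for n in ["extract", "transform", "load"]}
deps = {"transform": {"extract"}, "load": {"transform"}}
print(run_pipeline(tasks, deps))  # → ['extract', 'transform', 'load']
```

The same DAG expressed in Airflow would be three operators wired with `>>`; the topological ordering is what the scheduler computes for you.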
**Nice to have**:
- Familiar with machine learning concepts
- Experienced with spatial data analysis (we use Esri)
- Experienced with serverless computing, Containers, Docker and Kubernetes
- Experienced in designing and building data pipelines using Apache Spark/Databricks
- Experienced in configuring Spark/Dask clusters
Data Engineer II
Posted today
Job Description
**Summary**:
Raisa is an energy fintech company that uses proprietary technologies (in-house tech, machine learning microservices, securitization) to manage large investments in the United States. With over $2 billion of private funding, Raisa has built a diverse portfolio of oil and gas assets. We are passionate about innovation that leverages our team’s capabilities via proprietary technology and creates extraordinary results for all stakeholders.
We are looking for a highly motivated **data engineer** to join our team at Raisa. As a data engineer, your main goal is to be a few steps ahead of other teams, supporting them with the required data infrastructure, tools and data pipelines.
**Responsibilities**:
- Design, build and maintain Raisa’s data pipelines and data warehouse.
- Extract data from structured, semi-structured and unstructured datasets.
- Build dashboards and reports.
- Design and implement complex data models using SQL, Python and Apache Spark.
- Manage Raisa’s cloud data infrastructure on Azure.
- Design and build useful data tools and frameworks that can serve different teams.
- Communicate with different teams to understand their data requirements and their data infrastructure challenges.
**Must have**:
- 2-4 years of experience in data engineering
- Experienced in writing and optimizing complex SQL queries
- Familiar with building complex data pipelines using SQL and Python
- Familiar with AWS, GCP or Azure (we are on Azure)
- Comfortable working with ambiguous requirements
- Able to take initiatives and lead the way
- Focused on the details and having passion for data quality
- Passionate about code efficiency and cloud cost optimization
**Nice to have**:
- Experienced in modern data warehousing concepts and working with cloud data warehouses like Redshift, BigQuery or Snowflake (we are on Snowflake)
- Familiar with MLOps, including deploying and monitoring ML models in production
- Familiar with data pipeline orchestration tools like Airflow, Dagster or Prefect
- Familiar with machine learning concepts
- Familiar with spatial data analysis (we use Esri)
- Familiar with serverless computing, Containers, Docker and Kubernetes
- Familiar with designing and building data pipelines using Apache Spark/Databricks
- Familiar with configuring Spark/Dask clusters
Senior Data Engineer
Posted today
Job Description
- Develop, build, and implement new data collection systems where needed.
- Provide training and guidance on new systems that are being implemented.
- Keep the code used on file for future upgrades and repairs to the system.
- Manage and convert raw data into understandable information for other professionals.
- Ensure data is accessible to the required personnel to ensure the ease of understanding and access to the data needed to make informed business decisions.
- Train and mentor junior engineers.
- Ensure that implemented data systems have the relevant security.
**Requirements**:
- 6+ years of experience in data engineering
- Excellent SQL and Python skills
- Strong problem-solving skills
- Experience with NiFi is a huge plus