What is the impact you want to make?
Unleash your biggest strengths, apply your skills & knowledge, learn new things, connect with your peers, and build your career with us!
Why rinf.tech?
#EngineerOfTheFuture, #PeopleofManyTalents
- At rinf.tech, you’ll encounter friendly people who are eager to explore and reinvent the world of technology.
- We encourage ideas - we like to share and learn from each other. We’re all in for curious & ambitious people.
#GrowOpportunities
- We continuously invest in developing core teams focused on technologies like Blockchain, AI, and IoT - www.rinf.tech/careers/core-blockchain-and-ai-teams/
- Our Technical Management team possesses a robust technical background. Many of our team members have advanced to strategic roles through internal promotions.
- In a spirit of mutual willingness to share & grow, our RINFers commit to a minimum tenure of 2.5 years on a project.
#EngineeringExcellence
- Fail fast, learn fast: we experiment, we iterate, we know when to stop and we don't repeat the same mistakes.
- The right technology stack for the right problem: we don't force technology choices just because we know them; our focus is on solving problems, not on pushing predefined stacks.
#Innovation
- Adapta Robotics is a successful spin-off born from an R&D project within rinf.tech - www.adaptarobotics.com/
Why do we do what we do?
We inspire one another to share our tech work with this amazing and abundant world. That is why we became developers, innovators, thinkers, software builders, and hardware makers!
Our Vision!
Founded in 2006, with 650+ engineers and a global presence (8 delivery centers in Europe & North America), we strive to become a leading Eastern European technology partner for growing organizations that need digital transformation of their products and services!
What you’ll do
- Design and implement data pipelines to extract, transform, and load (ETL) data from various sources (databases, APIs, logs, etc.); a minimal sketch of such a step follows this list.
- Optimize data workflows for scalability, reliability, and performance.
- Ensure data consistency and quality throughout the pipeline.
- Cleanse, transform, and preprocess raw data to make it suitable for model training.
- Perform feature engineering to create relevant features for machine learning models.
- Handle missing data, outliers, and anomalies appropriately.
- Implement and maintain data storage solutions (relational databases, data lakes, NoSQL databases).
- Ensure data security and access controls.
- Manage data versioning and lineage.
- Tune data processing jobs for efficiency and cost-effectiveness.
- Parallelize data transformations to handle large-scale datasets.
- Monitor and troubleshoot pipeline performance.
- Work closely with data scientists, analysts, and software engineers to understand data requirements.
- Document data pipeline processes, dependencies, and assumptions.
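To give a flavor of the pipeline work described above, here is a minimal, hedged sketch of a single extract-transform-load step in Python. The input file, table, and column names (events.csv, events, user_id, amount) are hypothetical placeholders, not part of the role, and a real pipeline would typically run under an orchestrator such as Apache Airflow rather than as a plain script.

```python
import csv
import sqlite3

def extract(path):
    """Extract: read raw rows from a CSV source (could equally be an API or a log)."""
    with open(path, newline="") as f:
        yield from csv.DictReader(f)

def transform(rows):
    """Transform: cleanse rows by dropping records with a missing key and capping outliers."""
    for row in rows:
        if not row.get("user_id"):          # handle missing data
            continue
        amount = float(row.get("amount") or 0.0)
        amount = min(amount, 10_000.0)      # naive outlier cap, purely illustrative
        yield (row["user_id"], amount)

def load(records, db_path="warehouse.db"):
    """Load: write the cleansed records into a relational store."""
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS events (user_id TEXT, amount REAL)")
    con.executemany("INSERT INTO events VALUES (?, ?)", records)
    con.commit()
    con.close()

if __name__ == "__main__":
    load(transform(extract("events.csv")))
```

In practice, each stage would also be monitored and documented, and the transformation logic tuned and parallelized for large datasets, as the responsibilities above describe.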
What you need to be successful
- Bachelor’s or Master’s degree in Computer Science, Data Engineering, or related fields.
- Minimum 5 years of relevant experience in data engineering.
- Proficiency in SQL for data extraction and manipulation.
- Experience with ETL tools and frameworks (e.g., Apache NiFi, Apache Airflow, Talend).
- Knowledge of distributed computing frameworks (e.g., Apache Spark).
- Understanding of data modeling concepts (relational, star schema, snowflake schema); see the sketch after this list.
- Ability to design efficient data structures for storage and retrieval.
- Experience with cloud providers (AWS, Azure, Google Cloud) for data storage and processing.
- Familiarity with serverless computing (AWS Lambda, Google Cloud Functions).
- Understanding of big data technologies (Hadoop, Hive, HBase).
- Knowledge of stream processing frameworks (Apache Kafka, AWS Kinesis).
- Version control using Git.
- Analytical mindset with attention to detail.
- Strong problem-solving skills.
- Effective communication and teamwork.
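As a small illustration of the data modeling concepts listed above, the sketch below builds a hypothetical star schema (one fact table referencing two dimension tables) in an in-memory SQLite database and runs a typical aggregation across it; the table and column names are assumptions made for the example only.

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Dimension tables describe the "who" and "when" of each fact.
con.executescript("""
CREATE TABLE dim_customer (customer_id INTEGER PRIMARY KEY, country TEXT);
CREATE TABLE dim_date     (date_id     INTEGER PRIMARY KEY, year INTEGER);
-- The fact table holds the measures plus foreign keys to the dimensions.
CREATE TABLE fact_sales (
    sale_id     INTEGER PRIMARY KEY,
    customer_id INTEGER REFERENCES dim_customer(customer_id),
    date_id     INTEGER REFERENCES dim_date(date_id),
    amount      REAL
);
""")

con.executemany("INSERT INTO dim_customer VALUES (?, ?)", [(1, "RO"), (2, "US")])
con.executemany("INSERT INTO dim_date VALUES (?, ?)", [(10, 2023), (11, 2024)])
con.executemany("INSERT INTO fact_sales VALUES (?, ?, ?, ?)",
                [(100, 1, 10, 250.0), (101, 2, 11, 400.0), (102, 1, 11, 120.0)])

# A typical star-schema query: aggregate a measure by dimension attributes.
for row in con.execute("""
    SELECT c.country, d.year, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_customer c ON c.customer_id = f.customer_id
    JOIN dim_date d     ON d.date_id     = f.date_id
    GROUP BY c.country, d.year
"""):
    print(row)
```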
Next Steps for you!
- Apply
- CV screening
- HR Interview
- Technical Interview
- Offer presented by our CEO
Meet us!
Let's meet! We invite you to drop by anytime for a tour of our office, without any commitment.
