

Engineers who find a new job through WorksHub average a 15% increase in salary 🚀


Data Engineer (Japan based)

Tokyo, Japan

06 May, 2022

Compensation

¥7.5M - ¥10.1M

Contract type

Full time
Sponsorship offered

Technologies & frameworks

  • Git
  • Data Engineering
  • Python
  • Data Engineer
  • ETL pipelines
  • JVM
  • Spark

Benefits & perks

  • Pension contributions
  • Diverse management team
  • Japanese speaking lessons
  • 2 months relocation/housing

Change how payments are made in Japan with a real-time distributed platform handling millions of users.

Role overview

Please note that you must be eligible to work in Japan.

The Data management team in the Data department is the backbone of all data-related activities at Paidy, responsible for owning, managing, governing, cataloging, and provisioning a single source of truth for all data that has business value to Paidy. The team is also responsible for building and maintaining data pipelines for critical reports that keep Paidy's business running. 

We are seeking a data engineer to join this team and build together with us, focusing especially on building data pipelines for reporting. The day-to-day work involves discussing reporting requirements with business stakeholders, confirming metric definitions with those stakeholders and with data and risk analysts, designing flexible data structures and data marts to transform the data efficiently, then authoring, optimizing, deploying, and maintaining the queries at the core of the data pipeline that delivers a report. 

The ideal candidate will not only be able to handle the above day-to-day work, but also proactively identify and propose improvements to the reporting workflow itself, e.g. finding ways to optimize ETL pipelines and the ETL architecture behind reports with similar upstream data, proposing new processes to improve the report creation process, trying new techniques or technologies to validate report results and minimize report maintenance time, etc. 
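The extract-transform-load cycle described above can be sketched in plain Python. This is a minimal illustration with hypothetical field names and in-memory data; a real pipeline at this scale would read from a warehouse and be scheduled by an orchestration tool such as Prefect or Airflow (the team uses Prefect):

```python
# Hypothetical sketch of one reporting ETL step: aggregate settled
# transaction amounts per user into a small data mart.

def extract() -> list[dict]:
    # Stand-in for querying upstream transaction data from a warehouse
    return [
        {"user_id": 1, "amount": 1200, "status": "settled"},
        {"user_id": 2, "amount": 800, "status": "pending"},
        {"user_id": 1, "amount": 500, "status": "settled"},
    ]

def transform(rows: list[dict]) -> dict[int, int]:
    # Aggregate settled amounts per user; pending rows are excluded
    totals: dict[int, int] = {}
    for row in rows:
        if row["status"] == "settled":
            totals[row["user_id"]] = totals.get(row["user_id"], 0) + row["amount"]
    return totals

def load(totals: dict[int, int]) -> int:
    # Stand-in for writing to a data mart; returns rows written
    return len(totals)

def run_pipeline() -> int:
    return load(transform(extract()))
```

In practice each step would be a separately retryable, monitored task, which is what makes an orchestrator valuable for the first-line-support duties listed below.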

Key Role & Responsibilities

  • Consult with business stakeholders on reporting needs, collect requirements, document report background and technical specification. Work with the same stakeholders to collect feedback on report contents. 
  • Conduct ad-hoc analysis to confirm requirements and important report metrics definitions, often working together with analysts in credit risk and/or data science. Identify, investigate, and solve data discrepancies in reports. 
  • Design and document ETL pipelines and data structures upstream of the final generated reports. Find ways to optimize the ETL in the pipeline and adjust upstream data models/structures to increase flexibility, efficiency, or ease of achieving high accuracy in reports. 
  • Monitor report generation, provide first line support for any issues with deployed reporting pipelines and generated reports 
  • Working with data scientists, design and deploy BI tool dashboards to add value to reporting pipelines for stakeholders (we use Looker) 
  • Contribute to data lake/warehouse development and other data integration projects

Skills and Requirements

  • A Bachelor's degree in Computer Science, Information Technology, or a related subject 
  • 3+ years' experience in data engineering or in software development of data-intensive applications 
  • Strong skills in Python, Git, Docker, SQL, and ETL pipelines, plus an ETL/job-orchestration tool such as Airflow or Prefect (we use Prefect) 
  • Knowledge and experience with either Scala or Java a huge plus 
  • Knowledge and experience with Apache Spark a plus 
  • Experience with financial/accounting products or services, in particular their technical implementation impact and considerations 
  • Creative problem-solving skills and a passion for programming and solving problems with code

The Paidy team will ask about your experience with the Paidy app during the interview. Please download the app and try it out!

If you are unable to download the Paidy app due to regional restrictions, please try a similar app instead, such as Klarna, Afterpay, or Affirm, and come prepared with your opinions on these applications and services.

Company size: 50-249 employees

Paidy was founded with the mission to create a world where "we remove the barriers, embrace simplicity", offering its real-time monthly consolidated credit service all across Japan. Paidy launched Japan's first instant post-pay credit service for ecommerce consumers in October 2014.

Paidy requires no pre-registration or credit card to use: consumers purchase products online using only a mobile phone number and email address (verified through a four-digit code sent via SMS or voice pin-code) and settle a single monthly bill for all their purchases, either at a convenience store, by bank transfer, or by auto debit. Paidy also supports multi-pay installments and subscriptions. There are currently almost 4,000,000 Paidy accounts in use (Oct 2020)!

Paidy has proved a powerful means of persuading first-time buyers to transact online. Its proprietary models and machine learning mean that transactions are underwritten in seconds, with guaranteed payment to merchants. Paidy increases merchant revenues by reducing incomplete transactions, increasing conversion rates, boosting average order values, and facilitating easy repeat buying.
