Endpoint is hiring: Senior Data Analyst
About Our Company
Endpoint is a digital title and settlement company built from the ground up to make home closing easy for all. Founded in 2018 by a diverse group of tech and real estate veterans, Endpoint develops technology that streamlines home closing for real estate agents, buyers and sellers, and empowers proptech companies and investors looking to scale their closing operations.
Job Description & Responsibilities
At Endpoint, data drives our strategic decisions, with our Data Engineering and Platform team playing a crucial role in this process.
As we experience rapid growth, having accessible and actionable information is essential. We are looking for a Senior Data Analyst to join our dynamic team. In this role, you will collaborate closely with Product, Operations, Engineering, Revenue, and Business Development teams to build and maintain data solutions that provide valuable insights and support business recommendations. You will leverage your technical and analytical skills to manage stakeholder expectations and drive consensus. Additionally, you’ll play a key role in shaping the data infrastructure roadmap, ensuring that data governance and alignment are maintained across the company.
As a Senior Data Analyst, you will use your skills to
- Build and manage highly scalable, efficient data pipelines that process and integrate data from internal and external sources (a minimal sketch of this kind of work follows this list)
- Design and maintain data infrastructure that supports high-volume data processing, ensuring reliability, scalability, and performance
- Collaborate closely with cross-functional teams, including Data Science, Product, and Engineering, to understand their data requirements and deliver robust solutions
- Conduct data profiling, cleansing, and transformation to ensure data accuracy, integrity, and availability across systems
- Identify opportunities to automate data processes and optimize existing pipelines to improve performance and reduce costs
- Develop and maintain detailed documentation for data pipelines, data models, and workflows, ensuring alignment with industry best practices and company standards
- Contribute to developing internal tools and frameworks that enhance data accessibility and usability across the organization
- Support data governance initiatives by ensuring compliance with data quality standards and contributing to data stewardship efforts to maintain the integrity and reliability of our data
- Stay abreast of the latest technologies and industry trends, improve data engineering practices within the team, and cultivate a culture of continuous learning and growth
- Develop and implement optimized data models for data warehouses and data marts to meet the needs of analytics and reporting teams
- Drive operational excellence through a metrics-driven approach
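To give a concrete flavor of the pipeline work described above, here is a minimal extract → cleanse → load sketch in plain Python. It is purely illustrative: the `orders.csv` source, the column names, and the SQLite target are hypothetical stand-ins for the real sources and warehouse stack used in this role.

```python
import csv
import sqlite3
from datetime import datetime

# Purely illustrative: "orders.csv", the column names, and the SQLite
# target are hypothetical stand-ins for real sources and the warehouse.

def extract(path: str) -> list[dict]:
    """Read raw rows from a CSV source."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def cleanse(rows: list[dict]) -> list[dict]:
    """Drop incomplete records and normalize types."""
    clean = []
    for row in rows:
        if not all(row.get(k) for k in ("order_id", "amount", "closed_at")):
            continue  # data-quality rule: skip incomplete records
        clean.append({
            "order_id": row["order_id"],
            "amount": float(row["amount"]),
            "closed_at": datetime.fromisoformat(row["closed_at"]).date().isoformat(),
        })
    return clean

def load(rows: list[dict], db_path: str = "warehouse.db") -> None:
    """Idempotently upsert cleansed rows into a target table."""
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS orders "
        "(order_id TEXT PRIMARY KEY, amount REAL, closed_at TEXT)"
    )
    con.executemany(
        "INSERT OR REPLACE INTO orders VALUES (:order_id, :amount, :closed_at)",
        rows,
    )
    con.commit()
    con.close()

if __name__ == "__main__":
    load(cleanse(extract("orders.csv")))
```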
Requirements
You will come to Endpoint with
- 5+ years of Python expertise: Specializing in data engineering tasks like building, optimizing, and automating data pipelines. Proficiency in essential Python libraries and frameworks for data processing
- Code quality & efficiency: Focus on writing clean, maintainable, and efficient Python code with strong practices in error handling, logging, and performance optimization for scalable solutions
- 5+ years of SQL experience: Advanced skills in SQL, including performance tuning
- Cloud platform proficiency: Extensive experience with AWS, GCP, or Azure, using Python to automate and optimize cloud infrastructure operations, including resource provisioning, security management, and cost control
- Data pipeline development: Proven ability to design and maintain scalable, high-performance data pipelines, utilizing tools like Prefect and dbt alongside Python
- Technical leadership: Ability to work closely with cross-functional teams to translate complex business needs into effective technical solutions, leveraging Python for impactful, data-driven decision-making
- Data orchestration: Experience orchestrating ETL and reverse ETL processes in Python, using tools such as Prefect, Airflow, and Dagster to design complex data workflows (see the sketch after this list)
- Reliability & scalability focus: Commitment to ensuring the reliability and scalability of data systems through proactive monitoring, logging, and alerting, driven by Python solutions
- Data modeling & warehousing: Strong understanding of designing efficient data models and managing data warehouses, particularly on platforms like Snowflake, BigQuery, and Redshift
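For a sense of what orchestration means here in practice, below is a minimal sketch assuming Prefect 2.x, whose `@task` and `@flow` decorators provide retries, logging, and dependency tracking. The task names and bodies are hypothetical stubs; comparable flows can be written in Airflow or Dagster.

```python
from prefect import flow, task

# Orchestration sketch assuming Prefect 2.x; task bodies are
# hypothetical stubs standing in for real extract/transform/load logic.

@task(retries=3, retry_delay_seconds=60)
def extract_source(name: str) -> list[dict]:
    """Pull raw records from one upstream system (stubbed)."""
    return [{"source": name, "value": 1}]

@task
def transform(batches: list[list[dict]]) -> list[dict]:
    """Flatten and lightly normalize the extracted batches."""
    return [row for batch in batches for row in batch]

@task
def load_warehouse(rows: list[dict]) -> int:
    """Write rows to the warehouse (stubbed); returns the row count."""
    return len(rows)

@flow(log_prints=True)
def nightly_etl(sources: list[str]) -> None:
    batches = [extract_source(s) for s in sources]  # each call is a tracked task run
    rows = transform(batches)
    print(f"loaded {load_warehouse(rows)} rows")

if __name__ == "__main__":
    nightly_etl(["crm", "billing"])
```

Keeping the business logic in plain, decorated functions leaves each step testable outside the orchestrator; the retry policy on extraction reflects that upstream sources are typically the most failure-prone step.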
Nice to have
- Hands-on experience with dbt for data transformation and modeling, enhancing the efficiency and maintainability of data workflows
- Experience with BI tools like ThoughtSpot, enabling you to support the creation of impactful data visualizations and self-service analytics
- Familiarity with implementing and managing Data Catalog tools to organize and govern data assets across the organization
What we offer
Why work at Endpoint?
- You’ll join a fast-growing company where you can make an impact
- You’ll work alongside industry experts and the brightest minds in tech to transform an industry
- We foster a vibrant, welcoming, and inclusive company culture
- You’ll be entrusted with responsibility and autonomy in your day-to-day work
- You’ll have opportunities to advance your career through internal mobility and promotion from within
- Customizable benefits including Health, Dental, Vision, and 401(k) Match
- Flexible work options for specific roles
- Virtual and in-person team events
- We offer competitive compensation; base pay is one part of your total compensation package, in addition to an annual bonus. This role pays between $125,000 and $170,000, and your actual base pay will depend on your skills, qualifications, and experience.