Machine Learning Engineer @ Triunity Software, Inc. | Jobright.ai
Machine Learning Engineer jobs in San Francisco Bay Area
Less than 25 applicants · Posted by Agency
Triunity Software, Inc. · 12 hours ago

Machine Learning Engineer

Information Technology & Services
H1B Sponsor Likely
Hiring Manager: Prashant Rathore


Responsibilities

Hands-on coding mindset with a deep understanding of the relevant technologies and the ability to see the larger picture.
Sound knowledge of architectural patterns, best practices, and non-functional requirements.
8-10 years of overall experience in high-volume data processing, data platforms, data lakes, big data, data warehousing, or equivalent.
5+ years of experience with strong proficiency in Python and Spark (must-have).
3+ years of hands-on experience building ETL workflows using Spark and Python.
4+ years of experience with large-scale data loads, feature extraction, and data processing pipelines in different modes: batch, near-real-time, and real-time.
Solid understanding of data quality and data accuracy concepts and practices.
3+ years of solid experience building and deploying ML models in a production setting, with the ability to quickly handle data preprocessing, feature engineering, and model engineering as needed.
3+ years of experience with one or more Python deep learning libraries such as PyTorch, TensorFlow, Keras, or equivalent.
Prior experience working with LLMs and transformers; must be able to work through all phases of model development as needed.
Experience integrating with various data stores, including:
SQL/NoSQL databases
In-memory stores like Redis
Data lakes (e.g., Delta Lake)
Experience with Kafka streams, producers, and consumers.
Required: Experience with Databricks or a similar data lake / data platform.
Required: Java and Spring Boot experience for data processing, both near-real-time and batch.
Familiarity with notebook-based environments such as Jupyter Notebook.
Adaptability: Must be open to learning new technologies and approaches.
Initiative: Ability to take ownership of tasks, learn independently, and innovate.
With the technology landscape changing rapidly, the ability and willingness to learn new technologies as needed and produce results on the job.

Qualification

Data Processing, Python, Spark, Machine Learning Models, ETL Workflows, Databricks, Data Quality, Data Accuracy, Deep Learning Libraries, SQL Databases, NoSQL Databases, Redis, Data Lakes, Kafka Streams, Java, Spring Boot, Jupyter Notebook


Preferred

Ability to pivot from conventional approaches and develop creative solutions.

Company

Triunity Software, Inc.

Triunity is a Product Development, Staff Augmentation, and Consulting Services company providing solutions and services in North America.

H1B Sponsorship

Triunity Software, Inc. has a track record of offering H1B sponsorship. Please note that this does not guarantee sponsorship for this specific role. Additional information is provided below for reference. (Data powered by the US Department of Labor)
Trends of total sponsorships: 2023 (3), 2022 (2), 2021 (1), 2020 (1)

Funding

Current Stage
Early Stage

Leadership Team

Manohar Suryavanshi, Co-Founder & CTO
Rocky Chen, Partner
Company data provided by crunchbase