Data Visualization SME - Advana/Jupiter/Tableau/PowerBI/Qlik (DoD Clearance Required)
Navaide is a company dedicated to empowering organizations through innovative technology solutions. They are seeking a Visualization Specialist who will leverage technical knowledge and analytical skills to analyze complex datasets, collaborate with cross-functional teams, and deliver insights that support project objectives.
Cyber Security · Data Management · Information Technology
Responsibilities
Leverage technical knowledge and analytical skills to gather, clean, and analyze complex datasets
Employ statistical techniques to interpret and validate data trends, patterns, and anomalies
Collaborate with cross-functional teams to understand project requirements and deliver insights that support the contract's objectives
Prepare detailed analytical reports and presentations that communicate findings, recommendations, and key metrics to both technical and non-technical stakeholders
Work closely with cross-functional teams to identify opportunities for improving decision-making through data
Stay current on industry trends, emerging technologies, and best practices in data analysis
Track all assigned tasks and provide the team leader with updates on a regular basis
Become proficient in Agile/Scrum methodology
Demonstrate expert proficiency in Excel
Demonstrate expert proficiency with a data visualization platform such as Foundry, Tableau, PowerBI, or Qlik
Utilize data visualization platforms to create charts, pivot tables, interactive dashboards, and reports to facilitate decision-making processes
Design, develop, and maintain Extract, Transform, Load (ETL) and Extract, Load, Transform (ELT) pipelines using Databricks and Delta Lake, ensuring data quality through Medallion Architecture (Bronze, Silver, Gold)
Manage data ingestion processes, ensuring compliance and traceability with tools like Collibra for metadata and data governance
Collaborate with data engineers, data scientists, analysts, and business stakeholders to deliver tailored data solutions for reporting, machine learning, and operational needs
Optimize the performance of data storage solutions, leveraging Delta Lake tables for scalability, schema evolution, and time travel capabilities
Implement real-time data streaming solutions using Databricks Structured Streaming, ensuring data security and compliance through governance frameworks
Conduct exploratory data analysis (EDA) and maintain data lineage using Collibra and other governance tools
Collaborate on model deployment processes by providing compliant, traceable data pipelines
Build ETL and ELT data pipelines in Navy/DOD common data environments such as Advana/Jupiter for use in reporting, dashboards, and/or modeling
Maintain and debug data pipelines, performing deep root-cause analysis and implementing solutions for failures
Develop database documentation creatively, leveraging automated metadata collection techniques and detailing upstream transforms when necessary
Dissect complex SQL queries to optimize and repurpose them
Collaborate with non-technical stakeholders to gather and refine data requirements
Communicate in a professional manner with all team members and stakeholders
Demonstrate expert level knowledge of quality assurance processes
Ensure the quality of all data visualizations, interactive dashboards, reports, and deliverables
Demonstrate intermediate proficiency in the Advana and Jupiter platforms
Adhere to data security and privacy protocols, ensuring the confidentiality and integrity of sensitive information
Establish and maintain data governance practices, ensuring data quality and integrity across multiple sources
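The Medallion Architecture named in the responsibilities above (Bronze, Silver, Gold) layers data quality incrementally: raw landing, cleaned and validated, then business-ready aggregates. A minimal sketch in plain Python is shown below; the actual stack in this role would be Delta Lake tables transformed with PySpark on Databricks, and the record fields and cleaning rules here are hypothetical.

```python
# Hypothetical illustration of Medallion-style layering (Bronze -> Silver -> Gold).
# In production these would be Delta Lake tables, not in-memory lists.

def bronze_ingest(raw_rows):
    """Bronze: land raw records as-is, tagging each with its source."""
    return [{"source": "ops_feed", **row} for row in raw_rows]

def silver_clean(bronze_rows):
    """Silver: validate and standardize -- drop rows missing a value, cast types."""
    cleaned = []
    for row in bronze_rows:
        if row.get("amount") is None:
            continue  # a real pipeline would quarantine these for review
        cleaned.append({**row, "amount": float(row["amount"])})
    return cleaned

def gold_aggregate(silver_rows):
    """Gold: business-level aggregate ready for a dashboard."""
    totals = {}
    for row in silver_rows:
        totals[row["unit"]] = totals.get(row["unit"], 0.0) + row["amount"]
    return totals

raw = [{"unit": "A", "amount": "3"}, {"unit": "A", "amount": None}, {"unit": "B", "amount": "2.5"}]
gold = gold_aggregate(silver_clean(bronze_ingest(raw)))
print(gold)  # {'A': 3.0, 'B': 2.5}
```

Each layer only reads from the one before it, which is what makes lineage and data-quality auditing (e.g., via Collibra) tractable.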
Qualifications
Required
Active DOD clearance
Bachelor's degree in a relevant field (e.g., Data Science, Computer Science, Statistics, Engineering, Operations Research, or a related discipline)
Minimum of 5 years of dedicated data engineering experience with proven experience in building and managing data pipelines and architectures
5+ years of experience in data analysis, data mining, or business intelligence
3+ years of demonstrated experience delivering robust data and analytical solutions using PySpark, SQL, and/or Python in big data instances such as Databricks or Snowflake
Experience developing self-service analytics using tools such as iQuery and Redash
Intermediate- to expert-level experience with data visualization tools such as Foundry, Tableau, PowerBI, or Qlik, and/or advanced Excel techniques (macros, pivot tables, VBA, etc.)
Expert proficiency in Python or SQL
Experience with cloud-based platforms like AWS, Azure, or Google Cloud for data storage and processing
Familiarity with big data frameworks such as Hadoop, Spark, or Kafka
Expert proficiency in data cleaning, transformation, and statistical analysis
Understanding of business intelligence best practices and KPIs
Solid understanding of data warehouse design and ETL best practices
Strong analytical and problem-solving skills, with attention to detail and a high standard for accuracy when managing large datasets
Strong ability to communicate complex data insights to non-technical stakeholders in a concise and meaningful manner
Strong ability to draw meaningful conclusions from data
Deep analytical acumen with the ability to balance multiple projects and maintain key stakeholder engagement
U.S. work authorization is required
Demonstrated intermediate to expert level experience applying descriptive statistics to solve business problems
Demonstrated ability to work collaboratively in a team-oriented environment
Demonstrated willingness to learn and adapt to emerging technologies, including the Advana and Jupiter platforms
Previous experience in a client-facing role with a focus on government stakeholder communication and collaboration
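Descriptive statistics of the kind required above can be computed with Python's standard library alone; the sample values below are invented for illustration.

```python
import statistics

# Hypothetical daily report-processing times in minutes (values are invented).
times = [12, 15, 11, 22, 14, 13, 95, 16]

summary = {
    "mean": statistics.mean(times),
    "median": statistics.median(times),  # robust to the 95-minute outlier
    "stdev": statistics.stdev(times),    # sample standard deviation
    "range": max(times) - min(times),
}
print(summary)
```

The gap between mean (24.75) and median (14.5) here is exactly the kind of anomaly signal a candidate would be expected to spot and explain to stakeholders.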
Preferred
Intermediate experience with statistical methods such as hypothesis testing, confidence intervals, and simple linear regressions
Intermediate knowledge of Agile methodologies (e.g., Kanban, Scrum)
Previous experience leading a small project or contributing to project management efforts
Expert proficiency with Foundry
Master's degree in Data Engineering, Computer Science, or related discipline
Experience working with NoSQL databases such as Cassandra or MongoDB
Familiarity with containerization and orchestration tools such as Docker and Kubernetes
Knowledge of data governance and data privacy best practices
Experience in working with version control systems like Git and in Agile development environments
Proficiency with Databricks and Advana
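A 95% confidence interval for a mean, of the sort referenced in the preferred statistical methods above, can be sketched with the standard library's NormalDist. The sample data is invented, and a real analysis with a sample this small would typically use a t-distribution rather than the normal approximation shown here.

```python
import math
from statistics import NormalDist, mean, stdev

# Invented sample of dashboard load times in seconds.
sample = [1.2, 0.9, 1.5, 1.1, 1.3, 1.0, 1.4, 1.2]

n = len(sample)
m = mean(sample)
se = stdev(sample) / math.sqrt(n)   # standard error of the mean
z = NormalDist().inv_cdf(0.975)     # ~1.96 for a two-sided 95% interval
ci = (m - z * se, m + z * se)
print(f"mean={m:.3f}, 95% CI=({ci[0]:.3f}, {ci[1]:.3f})")
```

The same z-quantile machinery underlies the hypothesis tests mentioned alongside confidence intervals in the preferred qualifications.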
Benefits
Competitive compensation and comprehensive benefits, including medical, dental, and vision coverage
Flexible time off
Professional development opportunities
Company perks such as flex spending, wellness initiatives, etc.
401(K) matching
Company
Navaide
NavAide provides business systems, data management, enterprise, energy, and cyber solutions.
Funding
Current Stage
Growth Stage
Company data provided by Crunchbase