glassdoor_Data_Engineer.csv — 6925 lines (6401 loc) · 662 KB
Job Title,Salary Estimate,Company Name,Location,Job Description,Rating,Size,Founded,Type of ownership,Industry,Sector,Revenue
Senior Data Engineer,$100K - $133K (Glassdoor est.),LOOP BELL TECH INC3.7 ★,"Fort Worth, TX","Contract: W2
Note: No C2C
Responsibilities:
* Collaborates with architecture and technical leadership to define the vision and solutions
* Collaborate alongside other engineers of various disciplines to take the design and create executable pieces of work
* Participates in all phases of development
* Establishes and champions First Command data and data engineering standards/best practices
* Communicate and work alongside members of their team in support of their day-to-day work items
* Works with business partners to ensure alignment between the ask and the output
* Participate and lead peer reviews and champion peer review best practices and culture
* Key player and leader in an Agile environment, participating in daily huddles, sprint planning, retrospectives, etc.
* Mentors junior team members in best practices and standards
* Serve as escalation point for other team members on technical issues
* Leads effort to create and document deployment and release plans
* Works with architects to evaluate new technologies and patterns that will inform the technology roadmap
* Leads Communities of Practices or other cross functional training opportunities
* Leads troubleshooting processes to determine root cause analysis
REQUIREMENTS:
* Bachelor's degree required; MBA or MS or equivalent a plus
* 7+ years of applied experience in data integration, ETL, and data management or comparable positions that handle large/complex data sets, developing automation, and fostering business partner relationships
* Expert in one or more of the following ETL tools such as Azure Data Factory, Informatica, Matillion, Fivetran and DBT
* Experience working with a diverse set of data sources such as Flat File, Database, API, Event Streaming
* Expert in SQL with knowledge of T-SQL
* Strong experience in data modeling, data warehousing, and MDM solutions
* Familiar with Azure Synapse or Snowflake
* Familiar with Databricks Delta Lake
* Familiar with a scripting language such as python, powershell, or bash
* Familiar with data lake design patterns
* Excellent written communication and presentation skills
* Proficient in understanding of data mapping and lineage strategies
* Proficient in understanding in conceptual, logical, and physical data design
* Proficient in understanding of data management practices, data architecture principles, and data governance process
Preferred Qualifications:
* Expert in Azure Data Factory, Data Bricks, and Python
* Strong dimensional modelling skills
* Familiarity with DevOps principles and processes
* Applied experience in Agile, SAFe, or Scrum
* Financial services industry experience or other highly regulated industry experience a plus
* Familiarity with data science and analytics tools such as Alteryx, SPSS, SAS, Tableau, PowerBI
Job Type: Contract
Schedule:
8 hour shift
Ability to commute/relocate:
Fort Worth, TX 76102: Reliably commute or planning to relocate before starting work (Required)
Application Question(s):
Willing to work W2
Experience:
Informatica: 1 year (Preferred)
SQL: 1 year (Preferred)
Data warehouse: 1 year (Preferred)
Work Location: In person",3.7,1 to 50 Employees,-1,Company - Private,-1,-1,Unknown / Non-Applicable
Data Engineer,$65K - $93K (Glassdoor est.),Intralox4.3 ★,"New Orleans, LA","Intralox, L.L.C., a division of Laitram, L.L.C., and a global provider of conveyance solutions and services, has an opening for a Data Engineer/Power BI Developer within the Digital Solutions team. The Digital Solution (DS) team is leading Intralox’s implementation and evolution of our enterprise business applications and new digital solutions. DS is more than just an IT department – we are business domain experts, software developers, and operational support resources that use technology to bring more value to customers and Intralox.
Intralox is a division of Laitram, L.L.C., with an extensive portfolio of innovative conveyance solutions and services that improve lives and optimize businesses worldwide.
Our global workforce of over 3,000 employees in 20+ countries consist of reliable problem solvers, continuously developing and directly delivering solutions that have driven our customers’ growth worldwide for more than 50 years.
Intralox was founded on the principle of doing the right thing, by treating customers, employees, and suppliers with honesty, fairness, and respect. We invest heavily in these values and aim to practice our business philosophy principles every day, which is why we have been consistently recognized for innovation and workplace excellence. We believe in the power of a good idea no matter where it comes from, using trust as the foundation to how we work, and that self-managed people are our greatest asset.
Responsibilities:
Create and maintain optimal data pipeline architecture,
Assemble large, complex data sets that meet business requirements.
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and Microsoft ‘big data’ technologies.
Build analytics tools in Power BI that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
Create and support data tools for analytics and business team members that assist them in building and optimizing our products and business processes.
Work with data and analytics experts to strive for greater functionality in our data systems.
Requirements:
Minimum 2 years of experience in a Data Engineer role.
Bachelor's degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field. A graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field is preferred.
Advanced working knowledge of SQL and experience with relational databases, including query authoring (SQL). Additionally, familiarity with a variety of databases is desirable.
Proven experience in building and optimizing 'big data' data pipelines, architectures, and data sets.
Strong analytical skills for working with structured and unstructured datasets.
Proficiency in developing processes to support data transformation, data structures, metadata, dependency, and workload management.
Demonstrated success in manipulating, processing, and extracting value from large, disparate datasets.
Excellent project management and organizational skills.
Experience with or knowledge of the following software/tools or similar is required:
Big data tools: Hadoop, Spark, Kafka, etc.
Relational SQL and NoSQL databases.
Data pipeline and workflow management tools: Azure Data Factory.
Azure cloud services: VMs, Synapse Analytics, Data Factory, Azure SQL, Data Lake.
Object-oriented/object function scripting languages: Python, Java, C++, Scala, etc.
Knowledge of Oracle EBS preferred.
EOE/M/F/Vet/Disabled",4.3,1001 to 5000 Employees,1971,Company - Private,Machinery Manufacturing,Manufacturing,$500 million to $1 billion (USD)
Data Engineer - Deep Learning,$125K - $150K (Employer est.),Flawless AI4.6 ★,"Santa Monica, CA","Flawless AI is an energetic, growing startup operating at the intersection of AI and film making.
Our technology uses generative AI to make it look like your favorite actor or actress is speaking another language, without subtitles or voice-overs, making it indistinguishable from the original performance.
It doesn’t matter where you live or what languages you speak, audiences can now experience authentic storytelling exactly as the filmmaker originally intended through the magic of automated visual translation.
We believe the most important breakthroughs in AI are unlocked only when we apply it to real-world use cases that reach millions of people and are generally available. This is our north star and guiding principle.
Ethical, licensed, and balanced data is central to our AI research. The data team is responsible for sourcing, annotating, curating, and deploying large multi-modal datasets within the film media domain. The data team works with core and applied ML, lighting, staging, engineering, and film innovation teams to understand data requirements and deliver high-quality datasets that power next-generation AI models. The team is also responsible for data versioning, data DevOps, and persistent storage.
Our work in automated visual translation is just the beginning, we’re developing countless exciting products based on the application of our proprietary, cornerstone research.
This is an unbelievable opportunity to join a team operating at the cutting edge of the generative revolution, don't hesitate, reach out today.
Qualifications
Minimum Requirements
Bachelor's degree in computer science, machine learning, computer vision, or a related field
2+ years of experience preparing large datasets for deep learning and neural network models
2+ years of experience writing and testing modularized, production-level Python code
Experience with deep learning frameworks such as PyTorch, Tensorflow, Keras, or MXNET
Experience building scalable ML pipelines for image and video modalities with tools such as Flyte, Prefect, AirFlow, or Kubeflow
Experience with data collection, labeling, cleaning, and generation with tools such as LabelBox, SuperAnnotate, Scale Ai, or V7
Preferred Requirements
MS in computer science, machine learning, computer vision, or a related field
Experience with CI / CD automation using tools such as GitHub Actions or GitLab
Experience setting up and configuring cloud infrastructure resources with tools such as Terraform or CloudFormation
Experience with various cloud data storage technologies on AWS, GCP, or Azure
Experience writing production-grade C++
Experience with audio, text, or 3D data modalities
Benefits
Autonomy - You'll own your work from start to finish
Influence - You'll impact major research decisions
Publication - You’ll be encouraged to publish your work
Learning - You’ll push the state-of-the-art with the best in the world
Impact - Your input genuinely matters
Hybrid office model
Stock Options
Comprehensive medical, dental, and vision insurance
401(k) plan
Your choice of equipment
Flawless is proud to emphasize an equal opportunity, safe environment for people to do their best work. We are committed to providing equal employment opportunities regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements.",4.6,51 to 200 Employees,2018,Company - Private,Computer Hardware Development,Information Technology,$5 to $25 million (USD)
Test Data Engineer,$50.00 - $53.00 Per Hour (Employer est.),TekwissenGroup,"Raleigh, NC",-1,4.6,-1,-1,-1,-1,-1,-1
Cloud Data Protection Engineer,$70.00 - $78.00 Per Hour (Employer est.),Cloud Destinations4.8 ★,Remote,"My name is Muralidharan and I’m working with Cloud Destinations LLC as a Technical Recruiter.
Below are the job details as needed by the hiring manager. Kindly advise if you might be interested and qualified for the below opportunity and if we can discuss further on this
Position: Cloud Data Protection Engineer
Duration: 12-24+ Months Contract
Location: Remote
They are open to having remote resources (would have to travel occasionally), and would prefer candidates based in Charlotte, NC, Islen, NJ, or Dallas, TX.
Cloud Data Protection Engineer is responsible for designing, engineering and implementing a new, cutting edge, cloud platform security for transforming our business applications into scalable, elastic systems that can be instantiated on demand, on cloud.
The role requires for the Engineer to design, develop, configure, test, debug and document all layers of the Cloud stack to satisfy the new big data system & Security requirements.
This is expected to range from Cloud hosting platform to the design and implementation of higher level services such as the IaaS, PaaS and SaaS layers, big data platform, Authentication/Authorization, Data encryption techniques (field level, db level, application layer encryption) masking tokenization techniques, and other security services.
The focus of this role is on the Security of the product and service for cloud big data platform, with understanding of ETL, data consumption and data migration security as well as data loss prevention techniques.
The ideal candidate should be comfortable being directly involved with the design, development, testing, and operation of the solutions that will be composed into the Cloud Services environment.
They will also provide comprehensive security consultation to business unit, IT management and staff at the highest technical level. Be able to conduct detailed threat analysis and identify mitigating security controls and solutions.
They will work closely with Cloud Services management to identify and specify complex business requirements and processes that drive the platform and application security Patterns and roadmap.
They will research and evaluate alternative solutions and make recommendations for changes that would enhance the security of the platform
Job Type: Contract
Salary: $70.00 - $78.00 per hour
Schedule:
8 hour shift
Experience:
Cloud Security: 10 years (Required)
IaaS, PaaS, SaaS: 4 years (Required)
Data Encryption: 5 years (Required)
Data Protection: 4 years (Required)
Data loss prevention: 5 years (Required)
Big data: 1 year (Required)
Work Location: Remote
Speak with the employer
+91 925-887-0055",4.8,201 to 500 Employees,2016,Company - Private,Enterprise Software & Network Solutions,Information Technology,$5 to $25 million (USD)
Data Engineer W/ Pega,-1,Sunera Technologies3.8 ★,Remote,"Hi
We have a direct client requirement for Data Engineer W/ Pega @ Miami FL.
Role: Data Engineer W/ Pega
Location: Miami FL
Duration : Long Term
Bachelor’s degree in IT related areas.
•5 or more years of relational databases specifically Oracle, DB1 experience mandatory.
•5 or more years of SQL and PL/SQL query language experience mandatory.
•5 or more years of Unix experience mandatory.
•5 or more years of automating routine tasks via stored procedures mandatory.
•Previous experience with Pega required.
Experience writing and/or understanding SQL statements OR previous Campaign design and execution with a campaign management solution application is mandatory
Job Types: Full-time, Contract
Experience level:
9 years
Schedule:
8 hour shift
Work Location: Remote",3.8,1001 to 5000 Employees,2004,Company - Private,Information Technology Support Services,Information Technology,$25 to $100 million (USD)
Data Engineer,$74K - $105K (Glassdoor est.),Gorbel3.6 ★,"Victor, NY","Gorbel’s mission is simple: We improve people’s lives.
That mission guides everything we do, from the products and service we provide to our outside customers to the work environment we foster for our employees. We are a manufacturer of material handling and fall protection products for the production and warehouse/distribution sectors. We’re on the cutting edge of manufacturing and distribution; a thriving, growing company that is constantly seeking out new ways to innovate and elevate our products and our processes – and we’re looking for people like you to join us in that mission.
We’re currently hiring for open positions in the US and Canada. We operate in Canada as Engineered Lifting Systems and Equipment (ELS)/DBA Gorbel® Canada, and subsequent communication related to Canadian positions may show the ELS name. You may be contacted by phone by recruitment personnel based in either Canada or New York.
Work Shift:
Job Description:
The Data Engineer is responsible for working with interdepartmental stakeholders and the data scientist to transform business requirements into effective high-quality professional visualization for consuming analytics. This person must be a self-starter who is comfortable with ambiguity and has strong attention to detail. The Data Engineer performs the transformation, filtering, and aggregation of raw data into concise, accurate, and focused data models by using internal software capabilities to acquire, ingest, and transform big datasets. This position is also responsible for collaborating with cross-functional teams for generating insights and presenting findings to senior management or using data visualization and presentation programs to suggest business improvements. Also, supporting ad-hoc analyses and reports needed for business decisions, planning, and execution. The Data Engineer also implements scalable data services using serverless Azure resources such as Data Factory, Synapse, Databricks, Azure Functions, and traditional SQL. The Data Engineer is responsible for new database design, performance tuning, and advanced administration both On-premise and cloud.
RESPONSIBILITIES:
Create and maintain optimal data pipeline architecture with Azure Data Factory and SSIS.
Build Data Storage Solutions with SQL Servers, Azure SQL DB, and Data Lakes.
Translate reporting and business needs into a scalable and manageable data solution
Engage with data source platform leads to gain a tactical and strategic understanding of data sources required by Data Services AI/ML.
Ensure data extraction, transformation, and loading data meet data security & compliance requirements.
Engage with Information Technology and Software Engineering for database design, performance tuning, and advanced administration. Both On-premise and in the cloud.
Create, maintain, and store documentation that describes the ETL solutions process for future reference
Keeps a working knowledge of new technologies that can be leveraged to drive improvement in our data management processes
Assist with the development and implementation of best practices around data management to ensure the accuracy, validity, reusability, and consistent definitions for common reference data.
REQUIRED QUALIFICATIONS:
Bachelor’s degree or combination of relevant experience in Computer Science, Information Systems, or other related field.
3-5 Years experience with coding and application development experience with multiple programming languages such as Python, R, SQL, or similar scripting languages.
3-5 Years Hands-on experience with cloud orchestration, automation tools, and CI/CD pipeline creation using Azure DevOps.
Strong Understanding of data modeling, data warehousing, data lakes, and big-data concepts.
Proficient in using visualization technology, proficiency in DAX, and working knowledge of Power Query to produce large-scale visual analytics implementations, performance tuning, and optimization
Ability to meet tight deadlines with high quality requirements
WORK ENVIRONMENT:
ADA Physical/Mental/Workplace Requirements
Occasional lifting up to 25 lbs.
Sitting, working at desk/personal computer for extended periods of time
Primary work environment is professional corporate
Gorbel® is an Equal Opportunity Employer that does not discriminate on the basis of actual or perceived race, creed, color, religion, alienage or national origin, ancestry, citizenship status, age, disability or handicap, gender, marital status, veteran status, sexual orientation, genetic information, arrest record, or any other characteristic protected by applicable federal, state or local laws. Gorbel® is also committed to providing reasonable accommodations to qualified individuals so that an individual can perform their job related duties. If you are interested in applying for an employment opportunity and require special assistance or an accommodation to apply due to a disability, please contact us at 585-924-6204.",3.6,201 to 500 Employees,1977,Company - Private,Machinery Manufacturing,Manufacturing,$25 to $100 million (USD)
Data Engineer,$70.00 Per Hour (Employer est.),Tellus solutions3.7 ★,"Sunnyvale, CA","Job Description:
The role will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross functional teams.
The ideal candidate is an experienced data pipeline builder and data wrangler who enjoy optimizing data systems and building them from the ground up.
The Data Engineer will support our software developers, database architects, and data analysts and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems and products.
The right candidate will be excited by the prospect of optimizing or even re-designing our data architecture to support our next generation of products and data initiatives.
Responsibilities:
Create and maintain optimal data pipeline architecture for data intensive applications.
Assemble large, complex data sets that meet functional / non-functional business requirements.
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using Azure SQL, Cosmo DB, Databricks and other legacy databases.
Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
Keep our data separated and secure across national boundaries through multiple data centers and Azure regions.
Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
Work with data and analytics experts to strive for greater functionality in our data systems.
Qualifications for Data Engineer
Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases.
Extensive Experience on Databricks on Azure Cloud platform, deep understanding on Delta lake, Lake House Architecture.
Programming experience on Python, Shell scripting, PySpark, and other data programming language.
Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
Strong analytic skills related to working with Data Visualization Dashboard, Metrics and etc.
Build processes supporting data transformation, data structures, metadata, dependency and workload management.
Skills:
A successful history of manipulating, processing and extracting value from large disconnected datasets.
Working knowledge of message queuing, stream processing, and highly scalable 'big data' data stores.
Familiar with Deployment tool like Docker and building CI/CD pipelines.
Experience supporting and working with cross-functional teams in a dynamic environment.
8+ years' experience in software development, Data engineering, and
Bachelor's degree in computer science, Statistics, Informatics, Information Systems or another quantitative field. Postgraduate/master's degree is preferred.
Experience in Machine Learning and Data Modeling is a plus.
Job Type: Contract
Salary: Up to $70.00 per hour
Benefits:
401(k)
Dental insurance
Health insurance
Schedule:
8 hour shift
Day shift
Application Question(s):
Only US Citizen and Green Card Holder
Experience:
Python, Shell scripting, PySpark: 5 years (Required)
Azure SQL: 5 years (Required)
Work Location: On the road",3.7,51 to 200 Employees,2006,Company - Private,Information Technology Support Services,Information Technology,$5 to $25 million (USD)
Azure Data Engineer,$45.00 - $50.00 Per Hour (Employer est.),AppsIntegration Inc2.9 ★,Remote,"Position: Sr. Azure Data Engineer
Duration: 12+months
Location: 100% Remote
TOP 3:
Azure Data Engineer:
Must have ADF, Azure SQL, Spark
1) Support and refine Constellation’s data and analytics technology stack with an emphasis on improving reliability, scale, and availability. ADF, Spark
2) Assist in the design and management of enterprise grade data pipelines and data stores that will be used for developing sophisticated analytics programs, machine learning models, and statistical methods.
3) Experience delivering data solutions via Agile methodologies and designing CI/CD workflows.
PRIMARY DUTIES AND ACCOUNTABILITIES
Item Accountability %
1 Create and maintain optimal data pipeline architecture 20
2 Assemble large, complex data sets that meet functional / non-functional business requirements. 20
3 Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc. 20
4 Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and Big Data technologies. 20
5 Deliver automation & lean processes to ensure high quality throughput & performance of the entire data & analytics platform. 10
6 Work with data and analytics experts to strive for greater functionality in our analytics platforms. 10
POSITION SPECIFICATIONS
Minimum: Preferred:
Experience in building/operating/maintaining fault tolerant and scalable data processing integrations using Azure
Strong problem-solving skills with an emphasis on optimizing data pipelines
Excellent written and verbal communication skills for coordinating across teams
A drive to learn and master new technologies and techniques
Experienced in DevOps and Agile
Experience using Docker or Kubernetes is a plus
Demonstrated capabilities with cloud infrastructures and multi-cloud environments such as Azure, AWS, IBM cloud
environments and using CI/CD pipelines.
Experienced using Databricks & Apache Spark
Experienced using Azure Data Factory or Synapse Analytics
Job Type: Contract
Salary: $45.00 - $50.00 per hour
Compensation package:
Weekly pay
Experience level:
8 years
Schedule:
8 hour shift
Experience:
Informatica: 1 year (Preferred)
SQL: 1 year (Preferred)
Data warehouse: 1 year (Preferred)
Work Location: Remote
Report",2.9,1 to 50 Employees,-1,Company - Private,Computer Hardware Development,Information Technology,$1 to $5 million (USD)
Data Engineer,$55K - $65K (Employer est.),NYC Office Of The Mayor4.0 ★,"New York, NY","Mayor’s Office of Immigrant Affairs
Position Title: Data Engineer
The Agency You’ll Join:
The NYC Mayor's Office administers all city services, public property, most public agencies, and enforces
all city, state, and federal laws within New York City. New York City’s Mayor, Eric Adams is head of the
executive branch of New York City's government. Mayor Adams has served the people of New York City
as an NYPD officer, State Senator, and Brooklyn Borough President. The Adams’ Administration is
leading the fight to make New York City’s economy stronger, reduce inequality, improve public safety,
and build a stronger, healthier city that delivers for all New Yorkers. As an agency, we value fairness,
helpfulness, transparency, leadership and build our teams around these values. For current job
opportunities visit our careers page
The Team You’ll Work With:
The Mayor’s Office of Immigrant Affairs (MOIA) was established pursuant to the New York City Charter to
promote the well-being of immigrant communities. To achieve this, MOIA serves as a bridge between city
government and immigrant communities; advises and assists in developing and implementing policies
designed to assist immigrants and speakers of languages; and supports and enhances the ability of city
agencies and offices to serve these immigrant populations. MOIA’s work cuts across a broad range of
issues citywide and MOIA works closely with cities around the country and the world to promote
innovations in immigrant access to services. To learn more, visit nyc.gov/site/immigrants/index.page
The Problems You’ll Solve
The Mayor’s Office of Immigrant Affairs is seeking a data engineer to be the tech lead on key projects,
including research, feasibility assessments, requirements gathering, solution development,
implementation planning, documentation creation, and roll-out of systems. Involves developing creative
solutions for challenges including disparate data, limited systems/resources, and understanding the City’s
policies for technology usage. Data analysis will focus on organizational performance more than
predictive analysis.
This role will act as the technical lead in the design and execution of systems implementations, will act
as lead for recommendations on technology solutions and approaches, and will provide data analysis
solutions for the office.
Responsibilities include, but not limited to:
Collaborate and coordinate with all MOIA teams to understand business requirements and
develop business solutions to meet these requirements.
Manage the planning, execution, and on-time delivery of assessments related to technology
solutions and approaches to data collection and visualization
Make recommendations on technology platforms and solutions.
Monitor and track progress of solutions development.
Provide technical assistance and training to staff in the implementation of solutions.
Generate various project-related documents including technical guides, user guides, installation
guides, templates, and technical and maintenance documentation.
Initiate, facilitate, and participate in on-going meetings with project team members to ensure that
solutions meet business needs and continue to iterate on enhancements.
Provide first level support to MOIA staff for troubleshooting and addressing questions related to
use of implemented solutions.
Work with external agencies such as DoITT, the Department of Information Technology and
Telecommunication, and the Mayor’s Office MIS department, to communicate needs, track
progress of trouble tickets and enhancement requests, and arrange priorities according to
resources.
Provide support in technology-related issues to the Director of Operations.
Resolve and/or escalate issues in a timely manner.
About You
High level of technical skills, including but not limited to advanced Excel and Power Query
High level of comfort with data-driven analysis, and skills necessary to present data and trends in
a useful manner
2 years of experience in advanced statistical analysis and data visualization using Power BI
2 years of experience in CRM systems (Microsoft Dynamics highly desirable).
3 years of experience in SQL creation and data manipulation
2 years of experience in HTML and JQuery
Experience with integration of data from multiple data sources
Experience with improving data reliability, efficiency and quality and ability to make
recommendations
Strong intuition for analytical methodologies and desire to solve novel technical challenges
Experience with BI frameworks and/or organization performance management analysis
Proficiency in designing efficient and robust ETL workflows
Excellent critical thinking and problem-solving skills, with the ability to set priorities and hold staff
on other teams accountable for outcomes
Excellent communications skills, both written and verbal; training experience highly desired
Excellent organizational skills
Highly professional demeanor
Ability to work independently
Patience, and the ability to navigate and work within a system in which many agency players
have input into technology decisions.
Ability to develop solutions that are not always obvious
A track record of effectively handling multiple priorities
Proven ability to work in a fast-paced environment and meet deadlines, and work productively
under pressure, both as an individual and as part of a team
Salary
The City of New York Office of the Mayor’s compensation package includes a market competitive salary,
equity for all full-time roles and exceptional benefits. Our cash compensation range for this role is
$55,000 – $65,000.
Final offers may vary from the amount listed based on candidate experience and expertise, and other
factors.
Equal Opportunity | Diversity Equity & Inclusion Statement
The Office of the Mayor is an inclusive equal opportunity employer committed to recruiting and
retaining a diverse workforce and providing a work environment that is free from discrimination and
harassment based upon any legally protected status or protected characteristic, including but not
limited to an individual's sex, race, color, ethnicity, national origin, age, religion, disability, sexual
orientation, veteran status, gender identity, or pregnancy.
The Adams Administration values diversity — in backgrounds and in experiences that is reflective of the
city it serves. Applicants of all backgrounds are strongly encouraged and welcome to apply.
If you are a qualified individual with a disability or a disabled veteran, you may request a reasonable
accommodation if you are unable or limited in your ability to access job openings or apply for a job on
this site as a result of your disability. You can request reasonable accommodations by EEO at
EEO@cityhall.nyc.gov.
New York City Residency Is Required Within 90 Days of Appointment
Report",4.0,Unknown,-1,Government,Municipal Agencies,Government & Public Administration,Unknown / Non-Applicable
Data Engineer,$45.00 - $50.00 Per Hour (Employer est.),Business Integra Inc.3.8 ★,Remote,"Remote Data Engineer / Architect
Location: Remote Work
Experience level: 7+ years
Required skills:
- Shell Scripting
- SQL, Python, Certificate Management
- Delphix, Genrocket, TDM
Description:
The Data Architect works in all data environments which includes data design, database architecture, metadata and repository creation. The Data Architect work assignments are varied and frequently require interpretation and independent determination of the appropriate courses of action.
Responsible for developing blueprints for all data repositories, evaluating hardware and software platforms, and integrating systems. Translates business needs into long-term data architecture solutions. Defines, designs and builds dimensional database schemas. Evaluates reusability of current data for separate analyses. Conducts data sheering to rid the system of old, unused or duplicate data. Reviews object and data models and the metadata repository to structure the data for better management and quicker access. Understands department, segment, and organizational strategy and operating objectives, including their linkages to related areas. Makes decisions regarding own work methods, occasionally in ambiguous situations, requires minimal direction, and receives guidance where needed. Follows established guidelines/procedures.
Required Qualifications
Bachelor's degree in Computer Science, Information Technology or related field
Less than 5 years of technical experience
Operational Data Integration for real-time APIs
Big Data Integration & Analytics
Must be passionate about contributing to an organization focused on continuously improving consumer experiences
Preferred Qualifications
Master's Degree
Job Type: Contract
Salary: $45.00 - $50.00 per hour
Experience level:
7 years
Schedule:
8 hour shift
Experience:
delphix: 4 years (Preferred)
genrocket: 4 years (Preferred)
Shell Scripting: 5 years (Preferred)
SQL: 6 years (Preferred)
test data management: 5 years (Preferred)
Work Location: Remote
Report",3.8,201 to 500 Employees,2001,Company - Private,Information Technology Support Services,Information Technology,$25 to $100 million (USD)
Data Engineer,$141K - $155K (Employer est.),Sephora3.6 ★,"San Francisco, CA","Job ID: 227599
Location Name: CA-FSC SF Off (0174)
Address: 525 Market St, 4th Floor, San Francisco, CA 94105, United States (US)
Job Type: Full Time
Position Type: Regular
Job Function: Information Technology
Company Overview:
At Sephora we inspire our customers, empower our teams, and help them become the best versions of themselves. We create an environment where people are valued, and differences are celebrated. Every day, our teams across the world bring to life our purpose: to expand the way the world sees beauty by empowering the Extra Ordinary in each of us. We are united by a common goal - to reimagine the future of beauty.
The Opportunity:
Your role at Sephora:
As a Data Engineer at Sephora, you will: Gather requirements using interviews, document analysis, requirements workshops, business process descriptions, business analysis and workflow analysis. Perform data migration from Enterprise data warehouse to new data platform. Create source to Stage and Stage to Target data mapping documents indicating the source tables, columns, data types, transformation required and business rules to be applied. Collect and analyze business data to develop solutions. Write SQL queries to analyze large datasets and resolve business engineering needs. Analyze and report on computer systems application implementation using Cognos, Tableau, SQL, and Big Data (Hive). Work with Tableau testing on mobile and web platforms. Define aggregation logic for new reporting requests. Analyze new source ingestion required for migration. Develop production support architecture, design, configuration, customization, integration and user acceptance testing. Work on multiple computer systems engineering projects using Software Development Life Cycle (SDLC) and Agile development methodologies. (Position allows some work-from- home flexibility, with schedule to be approved by manager. Must be able to work on site as required).
We are excited about you if you have:
Bachelor’s or foreign equivalent degree in Computer Science, Engineering or Information Systems.
Three (3) years of experience in data engineering and business analytics.
Experience must include:
Azure Databricks
Agile
SDLC
Microsoft SQL server
Cognos, SSAS Cube
Kafka
Tableau
Data analysis and mapping, data profiling, functional and data testing.
Salary: $141,000 to $155,000 per year depending on experience
Please visit our career website for additional information about our benefits package
While at Sephora, you’ll enjoy…
The people. You will be surrounded by some of the most talented leaders and teams – people you can be proud to work with.
The learning. We invest in training and developing our teams, and you will continue evolving and building your skills through personalized career plans.
The culture. As a leading beauty retailer within the LVMH family, our reach is broad, and our impact is global. It is in our DNA to innovate and, at Sephora, all 40,000 passionate team members across 35 markets and 3,000+ stores, are united by a common goal - to reimagine the future of beauty.
You can unleash your creativity, because we’ve got disruptive spirit. You can learn and evolve, because we empower you to be your best. You can be yourself, because you are what sets us apart. This, is the future of beauty. Reimagine your future, at Sephora.
Sephora is an equal opportunity employer and values diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, ancestry, citizenship, gender, gender identity, sexual orientation, age, marital status, military/veteran status, or disability status. Sephora is committed to working with and providing reasonable accommodation to applicants with physical and mental disabilities.
Sephora will consider for employment all qualified applicants with criminal histories in a manner consistent with applicable law.
To apply to this job, click Apply Now
Report",3.6,10000+ Employees,1969,Company - Private,Beauty & Personal Accessories Stores,Retail & Wholesale,$1 to $5 billion (USD)
Data Engineer,$90K - $119K (Employer est.),Rockstar Games4.2 ★,"New York, NY","At Rockstar Games, we create world-class entertainment experiences.
A career at Rockstar Games is about being part of a team working on some of the most creatively rewarding and ambitious projects to be found in any entertainment medium. You would be welcomed to a dedicated and inclusive environment where you can learn, and collaborate with some of the most talented people in the industry.
Rockstar is seeking a Data Engineer to join a team focused on building a cutting-edge game analytics platform and tools to better understand our players and enhance their experience in our games. This is a full-time permanent position based out of Rockstar's unique game development studio in New York, NY.
The ideal candidate will be skilled in developing complex ingestion and transformation processes with an emphasis on reliability and performance. In collaboration with other data engineers, machine learning engineers, and software engineers, the candidate will empower the team of analysts and data scientists to deliver data driven insights and applications to company stakeholders.
WHAT WE DO
The Rockstar Analytics team provide insights and actionable results to a wide variety of stakeholders across the organization in support of their decision making.
We are currently adding team members to multiple verticals, including Machine Learning and Game Data Pipeline.
RESPONSIBILITIES
Implement and maintain real-time and batch Data Models.
Deliver real-time and non-real-time data models to analysts and data scientists who create insights and analytics applications for our stakeholders.
Implement and support streaming technologies such as Kafka, Spark, Cassandra & AzureML.
Assist in the development of deployment automation and operational support strategies.
Assist in the development of a big data platform in Hadoop using pipeline technologies such as Spark, Airflow, and more to support a variety of requirements and applications.
Set the standards for warehouse and schema design in massively parallel processing engines such as Hadoop and Snowflake while collaborating with analysts and data scientist in the creation of efficient data models.
Maintain and extend our CI/CD processes and documentation.
QUALIFICATIONS
3+ years of work experience with data modeling, business intelligence and machine learning on big data architectures.
2+ years of experience with the Hadoop ecosystem (HDFS, Spark, Oozie, Impala, etc.) and big data ecosystems (Kafka, Cassandra, etc.).
2+ years of experience with the Azure ecosystem (Azure ML, Azure Data Factory)
Expert in at least one SQL language such as T-SQL or PL/SQL.
Experience developing and managing data warehouses on a terabyte or petabyte scale.
Strong experience in massively parallel processing & columnar databases.
Experience building Real-Time and/or Near-Real-Time ML pipelines.
Experience with Python, Scala, or Java.
Experience with shell scripting.
Experience working in a Linux environment.
SKILLS
Deep understanding of advanced data warehousing concepts and track record of applying these concepts on the job.
Ability to manage numerous projects concurrently and strategically, prioritizing when necessary.
Good communication skills.
Dynamic team player.
A passion for technology.
PLUSES
Please note that these are desirable skills and are not required to apply for the position.
Experience with Python based libraries such as Scikit-Learn
Experience with Databricks
Experience with Spark-ML, Jupyter Notebook, AzureML.
Experience in Lambda architecture.
Experience with CI/CD.
Familiar with Restful APIs.
Experience with Artifact Repositories.
Knowledge of the video game industry.
HOW TO APPLY
Please apply with a resume and cover letter demonstrating how you meet the skills above. If we would like to move forward with your application, a Rockstar recruiter will reach out to you to explain next steps and guide you through the process.
Rockstar is proud to be an equal opportunity employer, and we are committed to hiring, promoting, and compensating employees based on their qualifications and demonstrated ability to perform job responsibilities.
If you've got the right skills for the job, we want to hear from you. We encourage applications from all suitable candidates regardless of age, disability, gender identity, sexual orientation, religion, belief, or race.
The pay range for this position in New York City at the start of employment is expected to be between the range below* per year. However, base pay offered is based on market location, and may vary further depending on individualized factors for job candidates, such as job-related knowledge, skills, experience, and other objective business considerations. Subject to those same considerations, the total compensation package for this position may also include other elements, including a bonus and/or equity awards, in addition to a full range of medical, financial, and/or other benefits. Details of participation in these benefit plans will be provided if an employee receives an offer of employment. If hired, employee will be in an ""at-will position"" and the company reserves the right to modify base salary (as well as any other discretionary payment or compensation or benefit program) at any time, including for reasons related to individual performance, company or individual department/team performance, and market factors.
NYC Pay Range
$89,500—$119,400 USD
Report",4.2,1001 to 5000 Employees,1998,Subsidiary or Business Segment,Video Game Publishing,Media & Communication,$5 to $25 million (USD)
Looking for Data Engineer with EMR and/or EHR with 10+ years(Remote),$60.00 - $70.00 Per Hour (Employer est.),Lethyagroupinc,Remote,"Role: Data Engineer with EMR and/or EHR
Client: Confidential
Duration: Long Term
Location: Remote
Job Description
10+ years in IT with at least 3+ years’ experience in data warehousing, modelling, end-to-end BI solutions
Strong SQL, Spark, and PySpark programming skills for data analysis.
Experience with hospital/provider solutions (EHR, EMR codes, billing, etc.)
Strong understanding of Data Engineering Solutions, Data modelling, and Software Engineering principles and best practices
Experience in developing data platforms/ Big data and cloud technologies (e.g., Azure)
Advanced knowledge of SQL and query optimization techniques and approaches
Experience designing, developing, and supporting Power BI data sources and reports.
Able to work as a team member and willing to work independently when required.
Strong troubleshooting and problem-solving skills
Experience working in an Agile/SCRUM SDLC environment.
Problem-solving aptitude, with a willingness to work in a fast-paced product development environment and hands-on mentality to do whatever it takes to deliver a successful product.
Experience Skill Matrix:
Data Engineer: Years
Experience with hospital/provider solutions: Years
Must - EMR (electronic medical record) and/or EHR (electronic health record): Years
Data warehousing, modelling, end-to-end BI solutions: Years
SQL, Spark, and PySpark programming: Years
Azure: Years
SQL and query optimization: Years
Designing, developing, and supporting Power BI data sources and reports: Years
Agile/SCRUM SDLC environment: Years
Job Type: Contract
Salary: $60.00 - $70.00 per hour
Benefits:
Health insurance
Experience level:
10 years
Schedule:
8 hour shift
Experience:
Azure (Preferred)
SQL and query optimization (Preferred)
Designing, developing, and supporting Power BI (Preferred)
Agile/SCRUM SDLC environment (Preferred)
Data Engineer - 10 Years (Preferred)
hospital/provider solutions (Preferred)
EMR / EHR (Preferred)
Data warehousing, modelling, end-to-end BI solutions (Preferred)
SQL, Spark, and PySpark programming (Preferred)
Work Location: Remote
Report",-1,Unknown,-1,Company - Public,-1,-1,Unknown / Non-Applicable
Data Visualization Engineer,$66K - $90K (Glassdoor est.),GE Gas Power3.9 ★,"Greenville, SC","Job Description Summary
The individual will be part of a team tasked with unlocking value from the data lake both directly and in partnership with other functional and IT organizations. The candidate should have related expertise with report design, report tuning and performance optimization, data analysis, and data manipulation as well as a fundamental understanding of end-to-end business processes and BI systems.
Job Description
In this role, you will:
Work closely with teams and stakeholders to increase shop adoption/utilization of existing GSC products/applications, using Tableau & other analytics to identify and monitor gaps in both functionality and operational rigor
Help create shop operations standard work through analytics, data driven decision making, and data visibility. Work closely with teams and individuals to design and create dashboards/reports to ensure accurate data. Provide test support
Influence GSC product capabilities & functionality by feeding requirements/enhancements to close gaps identified through analytics; identify opportunities to build best practice/critical analytics into core GSC products
Coordinate with regional & global deployment team to identify analytics needs to support deployment roadmap – including gaps & existing capabilities
Partner with key functional & DT team members to improve overall adoption of existing analytics/dashboards through best practice sharing, training, and enhancements. Strong interpersonal skills, with ability to professionally interact with a diverse blend of personalities to reach resolution and maintain strong relationships.
Assist and manage customers through the report requirements and design process
Ability to query data both in relational database and big data platforms, can troubleshoot data issues and provide corrective measures
Ability to work with other technical teams across multiple initiatives to provide timely delivery in an enterprise environment
Education Qualification
Bachelor's Degree in Computer Engineering, Computer Science, Information Systems, Information Technology, “STEM” Majors (Science, Technology, Engineering, Math), or similar, with a minimum of 0-2 years of experience
Desired Characteristics
Prior experience with Tableau &/or Business Objects, Spotfire
Expert understanding of multiple modern BI software & visualization tools
Experience developing analytic solutions on top of MPP databases such as Greenplum, Teradata
Prior experience working with logical or semantic models as the basis for report development
Communicate complex issues effectively to internal staff and customers
Strong attention to detail
Work effectively with project managers and client stakeholders to understand and refine project requirements and deadlines
Desire to be part of a leading edge program with tight deadlines, global enterprise scope and high expectations, ability to learn new technology, processes, organizations, data sets and use cases fast and create solutions immediately
This Job Description is intended to provide a high level guide to the role. However, it is not intended to amend or otherwise restrict/expand the duties required from each individual employee as set out in their respective employment contract and/or as otherwise agreed between an employee and their manager.
Additional Information
GE offers a great work environment, professional development, challenging careers, and competitive compensation. GE is an Equal Opportunity Employer. Employment decisions are made without regard to race, color, religion, national or ethnic origin, sex, sexual orientation, gender identity or expression, age, disability, protected veteran status or other characteristics protected by law.
GE will only employ those who are legally authorized to work in the United States for this opening. Any offer of employment is conditioned upon the successful completion of a drug screen (as applicable).
Relocation Assistance Provided: No
Start your job application: click Apply Now
Report",3.9,10000+ Employees,1892,Company - Public,Energy & Utilities,"Energy, Mining & Utilities",$10+ billion (USD)
Data Engineer,-1,CareMetx3.4 ★,Remote,"Hey!
Are YOU passionate about applying cutting-edge technology to improve the human experience? Are you passionate about fixing a broken healthcare system that is difficult to navigate, with barriers to access and afford life-changing medicine and treatment? Are you passionate about technical excellence and deploying software that makes people happier and healthier? If so, CareMetx wants you to be a part of our growing engineering team!
At CareMetx, data teams own an outcome - that means teams are both accountable AND empowered for a unit of business value. We create an environment for learning opportunities and believe there is no such thing as “that’s not my job.” Our vision is to generate valuable insights from the data and drive strategic decision-making. All Caremetx data team members have the opportunity to explore new technologies and data trends. That means you will have the opportunity to grow deeper in the skills you’re passionate about and expand your breadth by learning skills that will help the team succeed.
As a Data engineer/analyst/scientist you will constantly deliver business value. You will also function as a catalyst for innovation and new ideas through creative problem-solving, elegant engineering, and the application of new technology and architectural patterns. You will help shape a performance-oriented learning culture by sharing your knowledge and skill depth within the team.
Manage large data sets and model complex problems that impact patient outcomes. Discover insights and identify opportunities using statistical, algorithmic, mining, and visualization techniques.
The core of the role is…
Define and build an industry-standard pipeline and Data Warehouse for a variety of data sources (No SQL, Relational, Text)
Enhance data collection procedures to include information that is relevant for building analytic systems
Model front end and backend data sources to help draw a more comprehensive picture of user flows throughout our system and enable powerful data analysis
Processing, cleansing, and verifying the integrity of data used for analysis.
Performing ad-hoc analysis and presenting results in a clear manner
Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed.
Reformulating existing frameworks to optimize their functioning.
Testing such structures to ensure that they are fit for use.
Preparing raw data for manipulation by data scientists
Relevant experience…
Degree(s) in Engineering, Computer Science, Math, Statistics, Economics, or related fields
5+ years of professional experience either in Big Data, Data Engineering, or Business Intelligence. This might include ETL, data warehousing, or data visualization.
Experience with Talend is preferred
Experience with API is preferred
Understanding of CI/CD, data governance, and data quality framework (great expectations) is preferred
5+ years of hands-on experience applying principles, best practices, and trade-offs of schema design to various types of database systems: relational (Oracle, MSSQL, Postgres, MySQL), NoSQL (HBase, Cassandra, MongoDB), and in-memory (e.g., VoltDB). Understanding data manipulation principles.
Deep understanding of NoSQL databases like MongoDB/Dynamo DB.
Understanding of data flows, data architecture, ETL, Star vs Snowflake schema, and processing of structured and unstructured data
Minimum 3 years of designing and building production data pipelines from ingestion to consumption within a hybrid big data architecture, using Java, Python, Scala, etc.
Report",3.4,501 to 1000 Employees,2011,Company - Private,Health Care Services & Hospitals,Healthcare,Unknown / Non-Applicable
Data Engineer,$60.00 - $70.00 Per Hour (Employer est.),zettalogix.Inc,Remote,"Title: Data Engineer
Experience in retail merchandising analytics is a must.
Duration: 12 months
Location: Remote
Interview Process:
1st round – Hirevue Video Call
2nd round – Hiring Manager
Skills Required:
Data engineering, analytics, and data modeling experience
Python, Spark, Databricks, Azure, Power BI
Experience in retail merchandising analytics is a must
Job Type: Contract
Pay: $60.00 - $70.00 per hour
Experience level:
8 years
Schedule:
Monday to Friday
Experience:
retail merchandising analytics: 8 years (Required)
Python: 8 years (Required)
Power BI: 8 years (Required)
Work Location: Remote
Show Less
Report",-1,1 to 50 Employees,-1,Company - Private,-1,-1,Unknown / Non-Applicable
Data Engineer,$87K - $129K (Glassdoor est.),Malin USA3.7 ★,"Addison, TX","***NO SPONSORSHIP AT THIS TIME***
Data Engineer Duties and Responsibilities
· Assemble large, complex sets of data that meet non-functional and functional business requirements
· Identifying, designing and implementing internal process improvements including re-designing infrastructure for greater scalability, optimizing data delivery, and automating manual processes
· Building required infrastructure for optimal extraction, transformation and loading of data from various data sources using AWS and SQL technologies
· Building analytical tools to utilize the data pipeline, providing actionable insight into key business performance metrics including operational efficiency and customer acquisition
· Working with stakeholders including data, design, product and executive teams and assisting them with data-related technical issues
· Working with stakeholders including the Executive, Product, Data and Design teams to support their data infrastructure needs while assisting with data-related technical issues
Skills and Qualifications
· Ability to build and optimize data sets, ‘big data’ data pipelines and architectures
· Ability to understand and build complex data models for Power BI reporting.
· Ability to perform root cause analysis on external and internal processes and data to identify opportunities for improvement and answer questions
· Excellent analytic skills associated with working on unstructured datasets
· Ability to build processes that support data transformation, workload management, data structures, dependency and metadata
Must Have experience in (in order of importance)
· 10+ years of experience as a Data Engineer or similar role.
· SQL Server
o T-SQL code
o Administrator functions
· Data Factory
o Python
· Power BI
o DAX code
o M code
· Azure
Malin is an Equal Opportunity Employer -- M/F/Veteran/Disability/Sexual Orientation/Gender Identity
Job Type: Full-time
Benefits:
401(k)
401(k) matching
Dental insurance
Employee assistance program
Flexible spending account
Health insurance
Health savings account
Paid time off
Referral program
Vision insurance
Experience level:
10 years
Schedule:
Monday to Friday
Work Location: Hybrid remote in Addison, TX 75001
Show Less
Report",3.7,501 to 1000 Employees,1971,Company - Private,Shipping & Trucking,Transportation & Logistics,$100 to $500 million (USD)
Data Engineer,$86K - $121K (Glassdoor est.),Virtualware Innovations4.8 ★,"Dallas, TX","Job Description
5+ years of experience in IT
Excellent knowledge in SQL and SSIS packages
Should have worked in GCP for more than 6 months
Good to have knowledge in Hadoop/Spark (either of these)
Should have basic knowledge in ETL
Good to have skill set: DB2 & Informix
Skillset Required – GCP, Spark , PySpark and Python, ETL tools , SQL, SSIS
Show Less
Report",4.8,1 to 50 Employees,-1,Company - Private,-1,-1,Unknown / Non-Applicable
Data Engineer,$95.00 - $105.00 Per Hour (Employer est.),Ryzen Solutions2.7 ★,"Cupertino, CA","Data Engineer
Master's degree in Computer Science, Mathematics or related field, or equivalent practical experience
3 to 6 years of experience in creating ETL workflows and automation using Python & database design techniques
Experience writing code in Python, PySpark and SQL
Libraries: Pandas, NumPy; good with time series
Knowledge in query troubleshooting such as isolating blocks of poor performing code, determining root cause, and developing remediation actions
Design-thinking & excellent verbal and written communication skills
Responsibilities
•Good experience in developing robust Python applications
•Efficient in database queries and data storage best practices
Data structure and Algorithm
Job Type: Contract
Pay: $95.00 - $105.00 per hour
Experience level:
2 years
Schedule:
Monday to Friday
Ability to commute/relocate:
Cupertino, CA: Reliably commute or planning to relocate before starting work (Required)
Experience:
Data structures: 2 years (Required)
Python: 2 years (Required)
Linked List: 2 years (Required)
Algorithm design: 2 years (Required)
Work Location: In person
Show Less
Report",2.7,Unknown,-1,Company - Private,-1,-1,Unknown / Non-Applicable
Data Engineer,$50.00 - $75.00 Per Hour (Employer est.),Okaya infocom4.1 ★,"Jersey City, NJ","Experience in Bigdata (Kafka, Elastic search, Logstash, Kibana)
Experience in Aws Managed services MSK, Glue, IAM etc
Experience in Hortonworks Data Platform or Cloudera Distribution stacks
AWS Development experience using these services (RDS with PostgreSQL experience, DynamoDB, Data Pipeline, Database Migration, AWS Kafka)
Experience in CDC process (Added Advantage if knowledge in Debezium or any CDC tools)
Required experience in SQL or Oracle (must), Data Modeling Concepts.
Required experience with Data Engineering and ETL/EDW Design Process and Practices.
Experience with GIT / Bit Bucket / SVN configuration for code check-in/Check-out and CI CD knowledge.
Experience with Source to Target mapping and Data Modeling practice.
Experience with Rest API Process ( PUT/GET)
Experience in Agile Methodology using SCRUM or Kanban
Insurance domain experience is a plus.
Handling Client experience will be added advantage
Job Types: Full-time, Contract
Salary: $50.00 - $75.00 per hour
Experience level:
10 years
11+ years
9 years
Schedule:
8 hour shift
Application Question(s):
What is your Work Authorization?
What's the RATE you are looking for?
Are you willing to relocate?
Experience:
Informatica: 7 years (Required)
SQL: 9 years (Required)
Data warehouse: 8 years (Required)
Work Location: On the road
Show Less
Report",4.1,Unknown,2006,Self-employed,Information Technology Support Services,Information Technology,Unknown / Non-Applicable
Data Engineer,$88K - $118K (Glassdoor est.),Loopback Analytics4.4 ★,"Dallas, TX","This employer will not sponsor applicants for employment visa status (e.g., H1-B) for this position. All applicants must be currently authorized to work in the United States on a full-time basis.
Come join our Real World Data team at Loopback Analytics. The Loopback platform assembles clinical, pharmacy, enterprise and social data for insight and action across the specialty pharmacy and life sciences value chain. The ideal candidate would be an experienced Data Engineer who will be responsible for building and maintaining data pipelines. The Data Engineer will facilitate deeper analysis and reporting across complex data sets to support customers.
Job Duties to Include
Assemble and manage large, complex sets of data to meet functional business and analytical requirements
Build required infrastructure, documentation and roadmap for optimal extraction, transformation and loading of data from various data sources
Design infrastructure for greater scalability, optimizing data delivery and automating manual processes
Plan, coordinate and implement security measures to safeguard data
Work with stakeholders including data, product and executive teams and assist with data-related technical issues
Develop and maintain processes for data profiling, data documentation, and data quality measurement leveraging both manual and automated data quality testing
Requirements
Technical Experience: 3-5 years of experience to include:
Implementing and designing data infrastructure to support data curation and data analysis
Orchestrating data transformation through cloud native analytics platforms (Snowflake, Databricks) across cloud environments (Azure, AWS, GCP)
Building and modeling data in relational and non-relational data storage technologies including schema design, stored procedure development and performance and optimization techniques (e.g. SQL & NoSQL, C#, Python, etc.)
Learning and understanding the various technical domains across the enterprise and able to communicate complex technical and business concepts across the enterprise and various business stakeholders
Documenting and testing of designed solutions
Writing code that runs in a production system or experience in machine learning
Required Education:
Bachelors, masters, or Ph.D. in computer science, software engineering or a related field or equivalent experience
Personal Characteristics:
Complex problem solver
Excellent program/task organizational skills
Detail and results oriented
Excellent communication skills
Travel:
Minimal
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, or national origin. For immediate full-time consideration, please forward your resume to Loopback Analytics via email at careers@loopbackanalytics.com.
About Loopback
Founded in 2009, Loopback was rated as one of the best places to work in Dallas by the DBJ. Loopback Analytics is a leading provider of data-driven solutions for hospitals and health systems. The company’s comprehensive analytics platform drives growth for specialty and ambulatory pharmacy programs while connecting pharmacy activities with clinical and economic outcomes. Loopback’s clients include leading academic medical centers, health systems, and life sciences companies. For more information about our company and services please visit our website at www.loopbackanalytics.com.
Show Less
Report",4.4,51 to 200 Employees,2009,Company - Private,Health Care Services & Hospitals,Healthcare,Unknown / Non-Applicable
GCP Data Engineer,$70.00 - $75.00 Per Hour (Employer est.),Apolis3.9 ★,"Dearborn, MI","Desired Qualifications & Experiences:
• Five or more years’ experience in software engineering.
• Five or more years’ experience in large scale RDBMS environments or Google BigQuery
• Two or more years of Exadata experience OR Google BigQuery
• Four or more years’ experience with Informatica PowerCenter or IICS
• One or more years’ experience in Erwin
• Experience in code automation (e.g. pattern based integration)
• Experience in advanced SQL and PL/SQL techniques
Experience in building re-usable Utility packages
Experience with testing the code
Experience in Unix shell and Python scripting
Integration design & data modeling skills in Data lake and Data Warehousing environments
Exposure to both on-prem and cloud Integration solutions
Familiarity with non-relational DB technologies is a plus
Experience with automated testing
Experience with both batch and real-time patterns for integrations
Ability to build and analyze complex integration workflows from heterogeneous data sources
Experienced in large Enterprise Data Warehouse & Integration projects.
Strong background in full lifecycle development using multiple platforms or languages.
Ability to interact at a technical and non-technical level with Infrastructure, Network, Development, BA and QA teams.
Development experience in high transaction/high availability systems.
Experience with analyzing and recommending solutions for Production issues short-term and long term
Job Type: Contract
Salary: $70.00 - $75.00 per hour
Experience:
Informatica Power Center: 1 year (Preferred)
SQL: 1 year (Preferred)
GCP: 1 year (Preferred)
BigQuery: 1 year (Required)
Python: 1 year (Preferred)
Work Location: In person
Show Less
Report",3.9,501 to 1000 Employees,1996,Company - Private,Information Technology Support Services,Information Technology,$25 to $100 million (USD)
Remote Data Engineer,$120K - $140K (Employer est.),LookFar Labs,United States,"Remote AWS Data Engineer
We have an existing commercial SaaS platform that consists of 3 components: a web application, several 3rd party databases integrated into our backend, and a Natural Language Processing ML model based on a custom taxonomy.
We are looking to build 2.0 of our platform, with a brand new front end based on new algorithms, and scalable data science models that use a confluence of data from various data sources (e.g., patent, financial, and people). It's a challenge and a fun opportunity for someone looking to make the next big platform that the world is going to use.
Our Data Engineer would need to create a new data pipeline, ETL process, and architecture for 2.0 of our platform. This could include multi-modal databases, and should consider the delineation between production, development, and staging/testing data pipelines and environments.
The data pipeline should easily integrate new data sources, with both structured and unstructured data, and should enable associations between data as well. It should also enable and further enhance the strong entity resolution that we have already started building for our disparate, large data sets to be cleanly integrated.
You should also not rely solely on off the shelf tools or default pipelines. This role will require creativity and customization.
Your solutions should keep in mind scalability, to enable optimized usage of distributed computing frameworks like Spark. You should also have strong familiarity and experience with how to leverage the AWS ecosystem to bring in relevant AWS tools, services, and resources to enable substantial processing of very large datasets before runtime, entity resolution between very large datasets, and real-time processing in a scalable, distributed computing environment.
Role Responsibilities:
Create and maintain a scalable ETL data pipeline that ingests multiple large data sets of both structured data (in the form of financial and patent data) and unstructured data (in the form of white papers, scraped websites, etc.), and enables entity resolution and other transformations for clean data integration and usage
Create and maintain a multi-modal data storage system that enables scalable, real-time processing for production-level data
Work with the data science team to enable ML Ops
Have curiosity and passion for data, and demonstrate strong and extensive understanding of our data, including ability to efficiently query and obtain data via SQL
Demonstrate a strong sense of ownership, of both technical and business outcomes
Assist dev and data science teams with processing and integrating data analysis
Clearly document processes, methodologies, and tools used
Experience
Required:
B.S. in relevant technical degree
Significant use and experience (at least 3-5 years) as a data engineer in the AWS ecosystem, including strong familiarity with structured and unstructured large datasets, enabling scalable and distributed compute, and ensuring real-time processing at scale
Significant use and experience (at least 3-5 years) with writing complex SQL queries and analysis of data correlations
Significant experience (at least 3-5 years) with the AWS ecosystem, including tools, services, and resources that enable scalable, distributed computing
Project management skills, ability to scope out timeline, methodology, and deliverables for development, testing, and integration into the platform
Excellent communication and story-telling skills (written and verbal)
Our Current Tech Stack:
AWS to host the infrastructure, including the CICD, SpringBoot, Angular, Python, PySpark, Kubernetes, EMR, Spark, Elasticsearch, RedShift, AWS (S3, Code Commit, Code Build, Code Deploy, EC2, EMR, etc.), Docker, Spacy, Scikit learn, Openpyxl, Streamlit, Watchdog, sklearn, seaborn, nltk, matplotlib, pandas, SQLAlchemy, and additional ML and python libraries. This stack is subject to change as we build v2.0.
We want to modernize and streamline our models, MLOps, code, deployment, front-end, and distributed processing capabilities.
Logistics: Geography, Work Status, Etc.
The position is full-time on a W2 and fully remote. The candidate must have the legal right to work in the United States.
Interview Process:
We will conduct 3 rounds of interviews.
First Round: Culture, fit, and background interview with the Founders
Second Round: Technical Interview
Technical Project: Execute a small data engineering project, if selected for the third round of interview
Third Round: In-Person Day in Washington D.C. (We will have the candidate fly out to D.C. to meet the founders and team.) Present the results of the data engineering project during the In-Person Day.
How to Apply: Please provide the following:
Resume
Cover Letter
Any links to Git repositories or data engineering projects that we can review
About the Company:
We are the source of truth for patent intelligence. Patents protect revenue and investment in the market. Given that, patent intelligence is not complete UNLESS it integrates financial and market data. We provide SaaS platforms that correlate multiple data sets (patent, financial, and people data) using scalable data science models, in order to answer fundamental questions related to patent and innovation strategy.
We provide patent intelligence to corporate IP departments and the defense sector. We are expanding to a larger commercial market, including technology transfer, venture capital, and financial institutions.
We are committed to creating a diverse environment and are proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, pregnancy, disability, age, veteran status, or other characteristics.
Job Type: Full-time
Salary: $120,000.00 - $140,000.00 per year
Benefits:
Dental insurance
Health insurance
Paid time off
Vision insurance
Experience level:
4 years
Schedule:
Monday to Friday
Application Question(s):
Do you now or in the future need work authorization sponsorship?
If selected for the final interview round, are you able to fly to Washington, DC for an in-person interview (all travel expenses paid)?
Experience:
complex SQL queries and analysis: 3 years (Required)
AWS: 3 years (Required)
ML Ops: 3 years (Required)
building scalable ETL data pipelines: 3 years (Required)
Spark framework: 2 years (Required)
Work Location: Remote
Show Less
Report",-1,Unknown,2014,Company - Private,Software Development,Information Technology,Unknown / Non-Applicable
Data Engineer,$72K - $106K (Glassdoor est.),"Naval Systems, Inc.3.9 ★","District Heights, MD","Description: NSI requires a Data Engineer to support the contracted efforts. The Data Engineer will develop and implement a set of techniques and analytics applications to transform raw data into meaningful information using data-oriented programming languages and visualization software. The position is mainly focused on Raw Data File (RDF) smart aircraft data downloaded after an aircraft sortie, then loaded into an information system for analysis. The data engineer will be designing and maintaining the pipelines for loading the data and developing queries/procedures to support analyst requirements. Additional projects will include Condition-Based Maintenance+ and Health Usage and Monitoring System data pipeline engineering and warehousing.
Location: Washington, DC, Norfolk, VA, Philadelphia, PA
Education: Bachelor’s Degree in computer science, engineering, mathematics, statistics, business or a similar field.
Certifications: Current Network+, Security+, or higher as defined by DoD CIO Information Assurance (IA) Certification requirements
Experience: Three (3) years of relevant experience in data using architecture, data engineering, data hub / data warehouse development. Three (3) years of relevant experience in utilizing data management, enterprise repository, data modeling, data quality and data mapping tools. Experience with Databricks required.
Security Clearance: Secret Clearance is Required. Must be U.S. citizen.
Special Notes/Instructions: NSI is a privately held, small but quickly growing company with headquarters in Lexington Park, Maryland within 5 miles of the Patuxent River Naval Air Station. Established in 2004, we are now celebrating 19 years of excellence in providing quality products and services to the Department of Defense. Our benefits package includes medical, dental, vision, Long Term Disability, Life Insurance, Short Term Disability, paid time off, paid holidays, flexible spending account, employee assistance program, tuition assistance program, 401k Plan with company match as well as a fun and enthusiastic work environment!
To Apply: NSI offers a team-oriented work environment and a competitive compensation and employee benefits package. If you have a commitment to excellence and want to join our team of top caliber professionals, we invite you to submit your resume electronically by visiting our careers website at: https://n-s-i.us/careers/apply/.
Quality, Integrity, Teamwork, Success – that's NSI!
NSI is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or veteran status.
XJ6
Show Less
Report",3.9,201 to 500 Employees,2004,Company - Private,Aerospace & Defense,Aerospace & Defense,Unknown / Non-Applicable
Data Engineer I,$77K - $101K (Glassdoor est.),CSX2.9 ★,"Jacksonville, FL","Job Summary
The Data Engineer is responsible for the Cloud and on-prem data creation, maintenance, improvement, cleaning, and manipulation of data in the business’ operational and analytics databases. The Data Engineer works with the applications teams, business partners, and data & analytics team to understand and aid in the implementation of database requirements, analyze performance, and troubleshoot any existing issues. The data engineer maintains the optimal Azure cloud data pipeline architecture, assembles data sets that meets functional / non-functional business requirements and performs optimal extraction, transformation, and loading of data from a variety of data sources using SQL and/or other data technologies. This person will be responsible for delivering cloud analytical tools, solutions, and reporting. Applicants will be required to engage in ongoing background checks throughout the duration of this position with continued passing results.
Primary Activities and Responsibilities
Collaborate with other IT partners within the company.