Search results

Software Engineer - Game Publishing - REMOTE

Our Client is seeking a Remote Software Engineer - Game Publishing.

Role Description
The Battle.net & Online Products organization is home to 300+ superpowered engineers, program managers, and designers focused on the technology that powers our Entertainment games. Whether you’re playing one of our titles, chatting with friends, or just shopping online, B&OP ensures that our players are immersed in engaging, exciting, and secure experiences. The Game Services Group develops the software, services, and infrastructure that keep millions of players online simultaneously worldwide, 24 hours a day, 365 days a year. When a player logs in, sends a friend request, a whisper, or a chat within one of our rich virtual worlds, Game Services powers these capabilities. When you use voice chat, check your profile statistics, or create a new social group, we are the team that makes those things possible. From Overwatch to Hearthstone, StarCraft 2 to Diablo 3, World of Warcraft to Heroes, regardless of the game, time zone, or scale, Game Services is ready to answer the call with effectiveness and professionalism, acting as the central pillar to supercharge all player engagement.

• Work with a small and talented team to develop scalable, highly performant platform services
• Implement new features and services to support the needs of multiple teams
• Participate in the ongoing effort to improve our platform infrastructure, with the goal of achieving ever-increasing service availability
• Perform research to acquire new knowledge necessary to perform assigned tasks and maintain a process of technological evolution
• Develop unit and integration test code to validate service reliability

Skills & Requirements
• A degree in computer science or a related field
• A minimum of 3 years of relevant work experience
• Ability to work in a collaborative environment
• Excellent communication skills
• Advanced understanding of C++
• Strong data-structure, logic, and algorithm skills
• Experience with protocol and API design
• Self-motivated
• A desire to help make the service the best that it can be for our players

Advantages
• Proficient in at least one scripting language such as Python
• Prior development work on distributed systems and client/server architectures
• Experience with performance analysis and code optimization
• Linux development experience (server applications, gdb debugging, etc.)
• Knowledge of network and server security issues
• Database development experience (MySQL, Oracle, Cassandra, etc.)
• Enthusiastic about supporting a live service

Top Skills:
• C++
• Python
• Kubernetes
• Docker
• Cloud
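The role above centers on platform services validated by unit and integration tests. Purely as a hedged illustration (the sketch is Python rather than the role's primary C++, and PresenceService and every name in it are invented, not anything from the posting or from Battle.net), the snippet below shows the kind of test-first contract such a social/presence service might be held to:

    import unittest


    class PresenceService:
        """Hypothetical in-memory stand-in for a social/presence platform service."""

        def __init__(self):
            self._online = set()
            self._friends = {}

        def log_in(self, player):
            self._online.add(player)

        def send_friend_request(self, sender, receiver):
            # Reliability contract: reject requests from players who are not logged in.
            if sender not in self._online:
                raise RuntimeError("sender must be logged in")
            self._friends.setdefault(receiver, set()).add(sender)

        def pending_requests(self, player):
            return self._friends.get(player, set())


    class PresenceServiceTest(unittest.TestCase):
        """Unit tests validating the basic reliability contract of the stub service."""

        def test_friend_request_requires_login(self):
            svc = PresenceService()
            with self.assertRaises(RuntimeError):
                svc.send_friend_request("alice", "bob")

        def test_friend_request_is_delivered(self):
            svc = PresenceService()
            svc.log_in("alice")
            svc.send_friend_request("alice", "bob")
            self.assertIn("alice", svc.pending_requests("bob"))


    if __name__ == "__main__":
        unittest.main()

An integration variant of the same tests would exercise a deployed instance over its real protocol instead of the in-memory stub.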

HAYS PLC • Los Angeles, U.S.

-

Lead Data Platform Engineer

We are looking for our first fully dedicated Data Platform Engineer to collaborate with our growing team of Data Scientists and Analysts. As the lead data platform engineer, you will be responsible for architecting our next-gen data processing applications and reporting systems. Being the first dedicated data platform engineer, you will have the opportunity to build wholly new systems and services. We have some of the foundational elements in place for a modern data stack (Redshift, dbt, Segment, Kubernetes, Docker, Airbyte, Mode) and an active data science practice. We are looking for someone who can determine the next phase of the roadmap for the data platform and lead us in building it -- for example, when do we want a different orchestration tool? When should we look to be less dependent on our current (third-party) libraries for client app event tracking? We are looking to invest in ensuring that the core infrastructure on which all of our research depends is scalable, tested, robust, and able to evolve to meet our constantly changing data science needs. We are looking for a lead engineer who enjoys being product-minded in the sense that they own a product from beginning to end by designing, constructing, integrating, testing, documenting, and supporting their creations.

Location: Cambridge, MA; San Francisco, CA; NYC
Compensation: Includes competitive salary, company stock options, and health benefits.
REMOTE UNTIL JANUARY 2022, then partial remote/office after.

As a lead data platform engineer you will be responsible for:
• Designing, building, and supporting our next-gen data processing applications and reporting systems through some combination of Python, Ruby, SQL, R, and Go, and determining what tools we should build and what tools we should buy
• Following software engineering development practices for building scalable and highly secure applications / services
• Crafting optimal data processing architecture and systems for new data and ETL pipelines and driving the recommendation for improvements and modifications to existing data and ETL pipelines. (While you will certainly contribute to production ETL workflows, we expect you to spend more time building the tools and systems that enable data scientists, analysts, and other engineers to build the majority of the workflows.)
• Collaborating with infrastructure teams to improve data processing CI / CD practices
• Evangelizing high-quality data engineering practices for building data infrastructure and pipelines at scale and fostering the next-gen, state-of-the-art data warehouse
• Analyzing extremely large data sets (tens of millions to billions of records) to identify, evaluate, and prioritize new opportunities to grow and optimize the business through analytics and data science

A strong candidate should be:
• Highly proficient in SQL
• Familiar with Python, Ruby, and/or Golang, with deep proficiency in at least one of those languages
• Experienced working with data pipelines in a cloud-native environment (bonus points for AWS experience)
• Able to write, test, ship, and maintain clean production code within a collaborative and version-controlled (git) codebase.
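Since the responsibilities above revolve around ETL pipelines and warehouse tooling built largely in Python and SQL, here is a minimal, assumption-laden sketch of the shape of one extract-transform-load step; the event data and table name are invented, and the standard-library sqlite3 module stands in for a warehouse such as Redshift:

    import sqlite3

    # Hypothetical raw events (day, event type, count); not data from the posting.
    RAW_EVENTS = [
        ("2021-07-01", "signup", 1),
        ("2021-07-01", "signup", 1),
        ("2021-07-02", "purchase", 1),
    ]


    def load_daily_counts(conn, raw_events):
        """Transform raw events into a daily aggregate table and load it."""
        conn.execute(
            "CREATE TABLE IF NOT EXISTS daily_event_counts (day TEXT, event TEXT, n INTEGER)"
        )
        counts = {}
        for day, event, n in raw_events:  # transform: aggregate in memory
            counts[(day, event)] = counts.get((day, event), 0) + n
        conn.executemany(
            "INSERT INTO daily_event_counts VALUES (?, ?, ?)",
            [(day, event, n) for (day, event), n in counts.items()],
        )
        conn.commit()


    if __name__ == "__main__":
        conn = sqlite3.connect(":memory:")
        load_daily_counts(conn, RAW_EVENTS)
        for row in conn.execute("SELECT * FROM daily_event_counts ORDER BY day, event"):
            print(row)

In a stack like the one described, the transform would more likely live in dbt models or warehouse-side SQL, with Python reserved for orchestration and data quality checks.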

Craft Recruiting • San Francisco, U.S.

-

Senior DevOps Engineer

What you’ll do:
We are looking for a talented DevOps Engineer (m/f/d) to join our skilled R&D team in order to evolve our proprietary engine for generating highly personalized advertising promotions, serving a large number of users. As a member of our Cloud Platform team you are responsible for building and operating our resilient and scalable platform on Google Cloud Platform, which includes automating our infrastructure, building infrastructure products to boost developer productivity, and owning our incident and post-mortem processes. At Schwarz Media Platform, we believe in cross-functional, agile teams, and you will regularly join our data, software, and machine learning teams to work on cross-functional projects. You will be given responsibility and autonomy in how best to achieve this objective and will collaborate with a cross-functional team of software developers, machine learning engineers, data engineers, and product managers who will support you. This role is crucial to help scale our vision of making marketing relevant and impactful!
• Deliver a part of our resilient, scalable & cost-effective cloud architecture on GCP
• Design and implement infrastructure products (e.g. deployment, monitoring) to enable our feature teams to ship value to our customers
• Work on all levels of our platform (i.e. network, compute, storage, frameworks, software)
• Automate our infrastructure and processes with tools like Terraform
• Introduce best practices for infrastructure and software security
• Manage and continuously evolve our incident management & post-mortem process to ensure availability and scalability of our platform according to service level objectives
• Enjoy the autonomy in reaching your goals and plenty of opportunities for growth, as learning is crucial at SMP

What you’ll bring along:
• Professional experience in building highly scalable architecture optimized for high availability, high data throughput, and low latency
• Production experience with cloud platforms like GCP (preferred) or AWS
• Experience with microservice architectures in production (Docker, Kubernetes, Helm)
• Knowledge of major configuration management systems as well as how to define infrastructure as code
• Networking and VPN solutions
• Programming (Python, Go) and scripting (Bash) skills
• Professional English (German is a plus)
• Modern container architecture and service meshes (e.g. Istio) (is a plus)
• Monitoring (Prometheus, Grafana, Alertmanager) (is a plus)
• Experience with software and infrastructure security (is a plus)
• CI/CD tools (ArgoCD, Google Cloud Build) (is a plus)
• Big data systems like Apache Spark, Apache Beam & Apache Kafka (is a plus)
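The posting stresses operating the platform against service level objectives and owning the incident and post-mortem process. As a rough sketch only (the 99.9% target and the request counts are invented, not figures from Schwarz Media Platform), the Python below shows the error-budget arithmetic such an SLO check reduces to:

    # Hedged SLO error-budget sketch; all constants are illustrative assumptions.
    SLO_TARGET = 0.999           # e.g. 99.9% of requests succeed over the window
    WINDOW_REQUESTS = 1_000_000  # total requests observed in the SLO window
    FAILED_REQUESTS = 420        # failed requests observed in the same window


    def error_budget_remaining(target, total, failed):
        """Return the fraction of the error budget still unspent (can go negative)."""
        allowed_failures = (1.0 - target) * total
        return 1.0 - failed / allowed_failures


    if __name__ == "__main__":
        remaining = error_budget_remaining(SLO_TARGET, WINDOW_REQUESTS, FAILED_REQUESTS)
        print(f"error budget remaining: {remaining:.1%}")
        if remaining < 0:
            print("SLO breached: page the on-call and open a post-mortem")

In practice the counts would come from monitoring queries (e.g. Prometheus) rather than constants, and a breach would feed the alerting and post-mortem workflow described above.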

Receptix • Berlin, Germany

-