The Cortex team is the core A.I. platform powering the vision of delivering the world’s best intelligent personal assistants to Walmart’s customers, accessible via natural voice commands, text messages, rich UI interactions, and a mix of all of the above via multi-modal experiences.
We believe /conversations/ are a natural and powerful user interface for interacting with technology and enable richer customer experiences – both online and in-store. We are designing and building the next generation of Natural Language Understanding (NLU) services that other teams can easily integrate and leverage to build rich experiences: from pure voice and text shopping assistants (Siri, Google Assistant, [[https://texttoshop.walmart.com/][Text to Shop]]), to customer care channels, to mobile apps with rich, intertwined, multi-modal interaction modes ([[https://apps.apple.com/us/app/me-walmart/id1459898418][Me@Walmart]]).
What you'll do…
Interested in diving in?
We need solid engineers with the talent and expertise required to
design, build, improve and evolve our capabilities in at least some of
the following areas:
Service-oriented architecture responsible for exposing our NLU capabilities at scale and enabling increasingly sophisticated model orchestration.
Since the service takes in traffic for a large set of Walmart customers (that is, 80% of American households!), you will get to solve non-trivial challenges in service scalability and availability.
You will design and build the primitives to efficiently orchestrate model-serving microservices, taking into account their dependencies, and improving the /combined/ latency and robustness of such microservices (e.g. fan out in parallel to N services for a single request, and reply with whichever gives the fastest answer).
You will also bake-in functionality which can drive improved machine learning modeling and experimental design, such as A/B testing.
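The fan-out pattern described above can be sketched in a few lines of asyncio. This is a minimal illustration, not our actual orchestration layer; the service names, delays, and `call_model` helper are all hypothetical stand-ins for real model-serving microservice calls:

```python
import asyncio

async def call_model(name: str, delay: float, payload: str) -> str:
    # Hypothetical stand-in for a call to one model-serving microservice.
    await asyncio.sleep(delay)
    return f"{name}: parsed '{payload}'"

async def first_response(payload: str) -> str:
    # Fan out to N services in parallel for a single request,
    # and reply with whichever gives the fastest answer.
    tasks = [
        asyncio.create_task(call_model("intent-v1", 0.05, payload)),
        asyncio.create_task(call_model("intent-v2", 0.01, payload)),
    ]
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    for task in pending:
        task.cancel()  # drop the slower replicas once a winner exists
    return done.pop().result()

result = asyncio.run(first_response("add milk to my cart"))
print(result)
```

In production the same idea extends to timeouts, retries, and dependency-aware scheduling across the microservice graph.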
Model serving and operations
There is a constant tension between model improvements (more computation) and model-serving latency. So we are always on a quest to crunch more numbers while preserving our SLAs and controlling operational costs.
You will guide our efforts to find the best trade-offs in architecture, tooling (TensorFlow Serving? / ONNX? / Triton?) and infrastructure (CPU? / GPU?, GCP? / Azure?) for model serving – based on the latest model developments and product requirements.
In particular, you will drive principled, scientific load-testing efforts to clearly identify the trade-offs at hand and to tune and optimize the model-serving stack.
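A load-testing harness in this spirit boils down to firing concurrent requests and summarizing the latency distribution. The sketch below uses a simulated inference call (`fake_inference` is a hypothetical stand-in, with a fixed 2 ms "compute" sleep); a real harness would hit the model server's endpoint instead:

```python
import concurrent.futures
import statistics
import time

def fake_inference(_: int) -> float:
    # Hypothetical stand-in for one request to the model server;
    # returns the observed latency in milliseconds.
    start = time.perf_counter()
    time.sleep(0.002)  # simulated model compute
    return (time.perf_counter() - start) * 1000

def load_test(requests: int, concurrency: int) -> dict:
    # Issue `requests` calls with `concurrency` workers and
    # report tail-latency percentiles.
    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = sorted(pool.map(fake_inference, range(requests)))
    return {
        "p50_ms": statistics.median(latencies),
        "p99_ms": latencies[int(0.99 * (len(latencies) - 1))],
    }

report = load_test(requests=100, concurrency=10)
print(report)
```

Sweeping `concurrency` and batch size against p50/p99 is what makes the CPU-vs-GPU and serving-stack trade-offs concrete.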
Tooling, infrastructure and pipelines for reproducible workflow and models, enabling rapid innovation across the entire product lifecycle.
You will author and maintain pipelines that safely build and deploy models to production via continuous deployment.
You will deliver scalable and efficient resource-management capabilities on our cloud infrastructure.
You will provide robust and built-in diagnostics for quality control throughout.
You will integrate – or build – labeling tools that plug in seamlessly at the heart of our conversation data store (GCP, BigQuery) and intertwine multiple labeling sources of varying confidence levels.
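One simple way to intertwine labeling sources of varying confidence is a confidence-weighted vote. This is only an illustrative sketch with made-up labels and weights, not our production aggregation logic:

```python
from collections import defaultdict

def merge_labels(votes: list[tuple[str, float]]) -> str:
    # votes: (label, source confidence) pairs from different labeling
    # sources; pick the label with the highest total confidence mass.
    scores: dict[str, float] = defaultdict(float)
    for label, confidence in votes:
        scores[label] += confidence
    return max(scores, key=scores.get)

# A high-confidence human label outweighs two weak model labels.
label = merge_labels([("add_to_cart", 0.9), ("search", 0.4), ("search", 0.4)])
print(label)
```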
Come at the right time, and you will have an enormous opportunity to
make a massive impact on the design, architecture, and implementation
of an innovative, mission critical product, used every day, by people
you know, and which customers love.
As part of the emerging tech group, you will also have the additional
opportunity of building demos, proof of concepts, creating white
papers, writing blogs, etc.
Here are some of our team publications:
- “Building a Conversational Assistant Platform for Voice-Enabled Shopping” – https://medium.com/walmartglobaltech/building-a-conversational-assistant-platform-for-voice-enabled-shopping-6d174cdc4131
- “Using Context to Improve Intent Classification in Walmart’s Shopping Assistant” – https://medium.com/walmartglobaltech/using-context-to-improve-intent-classification-in-walmarts-shopping-assistant-28f62d40fd17
- “Making Walmart’s Shopping Assistant Proactive” – https://medium.com/walmartglobaltech/making-walmarts-shopping-assistant-proactive-53a1764fcdee
- (Talk) – Vivek Kaul, Shankara B Subramanya, R2K Workshop at KR 2018
- “Knowledge Graphs for AI-Powered Shopping Assistants” (Talk) – Ghodrat Aalipour, Mohammed Samiul Saeef, GraphConnect 2020 – https://graphconnect2020.sched.com/event/atkC/knowledge-graphs-for-ai-powered-shopping-assistants
- “Improving Intent Classification in an E-commerce Voice Assistant by Using Inter-Utterance Context” (Paper + Talk) – Arpit Sharma, Workshop on e-Commerce & NLP at ACL 2020 – https://www.aclweb.org/anthology/2020.ecnlp-1.6/
Minimum Qualifications
Solid data skills, sound computer-science fundamentals, and strong programming experience.
Deep hands-on technical expertise in full-stack development.
Programming experience with at least one modern language with an efficient runtime, such as Scala, Java, C++, or C#.
Experience with at least one relational database technology such as MySQL, PostgreSQL, Oracle, or MS SQL.
Some level of fluency in Python (lingua-franca of our data-scientists).
Understanding of the challenge of distributed data-processing at scale.
Deal well with ambiguous/undefined problems; ability to think abstractly.
Ability to take a project from scoping requirements through actual launch.
A continuous drive to explore, improve, enhance, automate, and optimize systems and tools.
Capacity to apply scientific analysis and mathematical modeling techniques to predict, measure and evaluate the consequences of designs and the ongoing success of our platform.
Excellent oral and written communication skills.
Bachelor’s degree or certification in Computer Science, Engineering, Mathematics, or any other related field.
Preferred Qualifications
Large scale distributed systems experience, including scalability and fault tolerance.
Experience taking a leading role in building complex data-driven software systems that were successfully delivered to customers.
Relentless focus on scalability, latency, performance robustness, and cost trade-offs – especially those present in highly virtualized, elastic, cloud-based environments.
Exposure to cloud infrastructure, such as OpenStack, Azure, GCP, or AWS, as well as infrastructure-management technologies (Docker, Kubernetes).
Experience building/operating highly available systems for data extraction, ingestion, and massively parallel processing of large data sets. In particular, experience building large-scale data pipelines using big-data technologies (e.g. Spark / Kafka / Cassandra / Hadoop / Hive / BigQuery / Presto / Airflow).
Hands-on expertise in many disparate technologies, typically ranging from front-end user interfaces through to back-end systems and all points in between.
Familiarity with Machine Learning concepts and processes.
Master's or PhD in Computer Science, Physics, Engineering, Math, or equivalent.
Outlined below are the required minimum qualifications for this position. If none are listed, there are no minimum qualifications.
Age – 16 or older
Outlined below are the optional preferred qualifications for this position. If none are listed, there are no preferred qualifications.
As required by law, Walmart will provide accommodations for the needs of associates with disabilities.
Primary Location…
1940 Argentia Rd, Mississauga, ON L5N 1P9, Canada