Machine learning operations (MLOps) makes machine learning development systematic, connects machine learning projects to a company's business and IT infrastructure, and brings automation to the relevant parts of the machine learning workflow. MLOps is a way of productizing machine learning projects that considers the whole lifecycle, from the experimental R&D phases to the deployment of production-ready models. Its biggest potential lies in scaling machine learning activities from individual projects to the entire organization through common practices, and in enabling developers to a larger extent than before.
In this blog post, I describe what MLOps means here at Silo AI and what we take into account when implementing MLOps projects with our customers, and give three examples of MLOps projects we have worked on.
We have discussed MLOps with Kaisa Salakka, Director, Research Labs, at Unity Technologies in our webinar, and you might be interested in watching the webinar recording.
How we think about MLOps at Silo AI
From a software and solutions development point of view, code is used to control solution behavior, but machine learning is nowadays often needed to deliver many of the required capabilities and to make solutions adapt based on available data.
Over the last ten years, DevOps practices have been adopted to bring software development, IT operations, and business needs closer together, replacing individual development efforts and projects with more continuous delivery of products and product features. The same has happened with data: DataOps processes have emerged so that data is handled systematically, especially in organizations that rely on big data for business intelligence and other analytics.
Depending on who you ask, MLOps is sometimes considered merely a way to train, deploy, and track machine learning models. In our view, however, MLOps sits between DataOps and DevOps, including parts of both. In addition, MLOps covers many machine-learning-specific elements, and compared to traditional software development, AI development requirements bring a lot of extra complexity to the development process.
The additional elements may include versioning data and ML models in addition to code, and maintaining their common lineage for coordination and compliance purposes; automatically orchestrating the multi-stage execution pipelines needed for training and testing ML models with big data; and organizing higher-cost (GPU) computation for deep learning development in a centralized manner instead of relying on the scattered and limited local compute available to developers.
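To make the lineage idea concrete, here is a minimal, hypothetical sketch (not a specific Silo AI tool or any particular MLOps product) of how a training run could tie a model artifact to the exact data and code versions that produced it, using plain content hashes:

```python
import hashlib
import json
import tempfile
from pathlib import Path

def file_hash(path: Path) -> str:
    """Content hash used as a version identifier for any artifact."""
    return hashlib.sha256(path.read_bytes()).hexdigest()[:12]

def record_lineage(data: Path, code: Path, model: Path, out: Path) -> dict:
    """Record which data and code versions produced a given model."""
    entry = {
        "data_version": file_hash(data),
        "code_version": file_hash(code),
        "model_version": file_hash(model),
    }
    out.write_text(json.dumps(entry, indent=2))
    return entry

# Demo with throwaway files standing in for real artifacts.
workdir = Path(tempfile.mkdtemp())
(workdir / "train.csv").write_text("x,y\n1,2\n")
(workdir / "train.py").write_text("# training script\n")
(workdir / "model.bin").write_bytes(b"\x00\x01")

lineage = record_lineage(
    workdir / "train.csv",
    workdir / "train.py",
    workdir / "model.bin",
    workdir / "lineage.json",
)
print(lineage)
```

In practice, dedicated tools (data version control systems, experiment trackers, model registries) handle this bookkeeping at scale, but the principle is the same: every model version should be traceable back to its inputs.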
In our view, the AI products and solutions – and the R&D leading to them – are an interplay of data, software, and machine learning model development.
From the AI maturity perspective, companies usually start with proof-of-concept projects to validate an AI use case. After a few successful PoCs, it eventually becomes evident that some kind of MLOps practices are needed to achieve a sustainable, transparent, and repeatable machine learning development workflow. MLOps is therefore required to reach a mature level of AI development and adoption across the organization.
Overall, at Silo AI we consider a whole stack of elements that MLOps needs in order to support R&D and business use of AI solutions in a sustainable and scalable manner across different business and client contexts. See the picture below for details.
What MLOps brings into an R&D organization
Because MLOps is the DevOps way of working applied to the whole machine learning development lifecycle and its end-to-end pipelines, it considers not only the technology stack but also the processes and culture that enable machine learning development across the organization. MLOps brings AI/ML development and traditional software development into dialogue and helps ensure that AI endeavors address the business needs.
Effectively, a good MLOps setup achieves for data scientists what a good DevOps setup achieves for software developers: they become more enabled and can take broad ownership of delivering new products and product features, which leads to increased business value and ROI.
Build scalable AI solutions with Silo AI
MLOps is still a rather new discipline, few practitioners are experienced in the field, and our customers often come to us for best practices in this area. Here at Silo AI, we have helped enterprises map and build their MLOps processes to scale AI adoption across projects in different technology setups and business domains. Below are a few examples of how our teams of experienced AI engineers, architects, and scientists have helped customers build up their AI capabilities with MLOps practices.
From MLOps exploration to an integrated team for a global systems and software vendor
Together with the client, we started by defining the requirements and technology options for a private cloud environment handling sensitive data, such as medical data. The outcome was a customized and standardized approach to automating machine learning model training that enables both consistent ML experimentation in research and AI feature delivery to end products. The standardized machine learning workflow and tooling brought data scientists and DevOps professionals closer together, enabling them to work towards shared targets and to move from the research and PoC phases to product delivery.
Helping a global retail chain to scale AI usage across regions and use cases
We are working with a global consumer brand to enable its data science teams to scale AI usage across their e-commerce store, combining open source technologies with an Azure cloud environment. Before Silo AI, there were already identified use cases for AI, but the models were built in data science environments isolated from the end products. We provided the client with a team of experts who defined and built a model deployment platform that enables AI product teams to deliver AI-driven e-commerce features connected to online systems.
From manual machine learning workflows to automated, scalable way-of-working
A data service provider had well-defined MLOps requirements but needed help with making technology choices and building a solution that met their need for scalable and rapid machine learning model training and reporting. Silo AI experts built model training infrastructure and automation to speed up their ML workflows using open source tooling and an AWS cloud environment. This setup also enables deeper collaboration between the data service provider and their clients, as client-specific requirements can be better accommodated through such a development environment.
If you are looking for a partner to take your AI capabilities to the next level, contact me.