Driving efficiency with Machine Learning Operations (MLOps)
By LIBRA AI Technologies
MASTERMINE employs sophisticated machine learning and AI models within the intricate landscape of European mines, with the overarching objective of advancing digitalisation, fostering environmental sustainability, enhancing productivity monitoring, and garnering public acceptance. Recognising the criticality of optimal performance and expedited integration of MASTERMINE AI components into the mines' infrastructure, the project embraces modern, cloud-based Machine Learning Operations (MLOps) methods and approaches.
What is MLOps?
Machine Learning Operations (MLOps) is a methodology that focuses on automating and standardising the processes involved in building, deploying, and operationalising Machine Learning (ML) models. It combines practices and tools from both the Machine Learning and operations (DevOps) worlds. The primary goal of MLOps is to expedite the transition from model development to production.
Why apply MLOps?
The application of MLOps is pivotal for several reasons. Firstly, it standardises the ML workflow, thereby enhancing productivity, reproducibility, and cost-effectiveness. Additionally, MLOps facilitates improved monitorability of ML models, ensuring their performance can be continuously assessed and optimised. By implementing a comprehensive MLOps framework, the following contributions to the overall effectiveness and efficiency of managing AI and ML models can be achieved:
• Version Control and Continuous Integration: MLOps involves version control of data, ML models, and code, as well as continuous integration and delivery pipelines. This ensures that changes to models or code are tracked, tested, and deployed efficiently.
• Continuous Monitoring and Evaluation: MLOps procedures ensure that AI models are continuously monitored and evaluated. This is crucial for maintaining model performance and ensuring that they meet expected levels of accuracy over time.
• Automated Deployment and Integration: MLOps enables automated deployment and integration of AI models into the existing infrastructure. This streamlines the deployment process and ensures seamless integration with other systems or applications.
• Regular Retraining with Fresh Data: MLOps facilitates the regular retraining of AI models with fresh and well-conditioned data. This helps in keeping the models up-to-date and adaptable to changing conditions or patterns in the data.
• Resilience and Safety: MLOps procedures build resilience to robustness and safety issues into the system by providing mechanisms for error handling and fallback plans. This enhances the overall reliability and safety of the system.
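To make the monitoring and retraining points above concrete, the sketch below (illustrative names only, not MASTERMINE code) tracks a deployed model's rolling accuracy and raises a retraining flag once performance degrades below a threshold:

```python
from collections import deque

class AccuracyMonitor:
    """Tracks a deployed model's rolling accuracy and flags when
    performance drifts below an acceptable threshold."""

    def __init__(self, window_size=100, threshold=0.9):
        # Keep only the most recent `window_size` outcomes.
        self.window = deque(maxlen=window_size)
        self.threshold = threshold

    def record(self, prediction, actual):
        """Record one prediction / ground-truth pair as correct (1) or not (0)."""
        self.window.append(1 if prediction == actual else 0)

    @property
    def accuracy(self):
        """Rolling accuracy over the current window."""
        return sum(self.window) / len(self.window) if self.window else 1.0

    def needs_retraining(self):
        """True once the window is full and accuracy has degraded."""
        return len(self.window) == self.window.maxlen and self.accuracy < self.threshold
```

In a production pipeline, the retraining flag would typically trigger an automated retraining job with fresh data rather than a manual intervention.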
MLOps in MASTERMINE
In MASTERMINE, LIBRA leads the development of a robust MLOps methodology and infrastructure meticulously tailored to the needs of the MASTERMINE ML and AI systems. By guaranteeing continuous integration and delivery, versioning, and automatic model validation and assessment, the MLOps framework is expected to speed up the development and deployment-to-production process for ML models by 70%. The framework will support the standardisation of ML project development, exploiting best practices for production-ready ML solutions, and will support building, deploying, and operationalising models. Key components of the AI model development and integration process are the following:
• Versioning of code, ML models, and other related artefacts.
• Tracking of model development experiments.
• A unified packaging format for developed ML models.
• A centralised model registry with versioning semantics for programmatic access to models.
• Foundations for implementing CI/CD pipelines for ML model training and deployment.
• Automatic model performance assessment and monitoring in production.
• Scheduled model retraining.
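As an illustration of the registry component listed above, here is a minimal in-memory model registry with versioning semantics, written with only the Python standard library (all names, including the example model, are illustrative assumptions, not part of the MASTERMINE codebase):

```python
class ModelRegistry:
    """A minimal in-memory model registry with versioning semantics.
    A real deployment would persist artefacts and metadata to storage."""

    def __init__(self):
        self._models = {}  # model name -> list of version records

    def register(self, name, artefact, stage="staging"):
        """Store a new version of a model; version numbers start at 1."""
        versions = self._models.setdefault(name, [])
        version = len(versions) + 1
        versions.append({"version": version, "artefact": artefact, "stage": stage})
        return version

    def promote(self, name, version, stage="production"):
        """Move a specific version to a new stage (e.g. production)."""
        self._models[name][version - 1]["stage"] = stage

    def latest(self, name, stage=None):
        """Programmatically fetch the newest version, optionally filtered by stage."""
        candidates = self._models.get(name, [])
        if stage is not None:
            candidates = [m for m in candidates if m["stage"] == stage]
        return candidates[-1] if candidates else None
```

The versioning semantics let downstream services request, say, "the latest production version" by name, decoupling model consumers from the training process.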
LIBRA will also set up an open-source technological framework to support the adopted MLOps approach.
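Open-source tools such as MLflow or DVC are common building blocks for such frameworks, though the source does not name the final selection. As a rough sketch of what the experiment-tracking part involves, the snippet below logs each training run's parameters and metrics to a JSON-lines file so runs can later be compared and reproduced (the file layout and function names are illustrative assumptions):

```python
import json
import time
from pathlib import Path

def log_run(experiment_dir, params, metrics):
    """Append one experiment run (parameters + resulting metrics)
    to a JSON-lines log for later comparison and reproduction."""
    experiment_dir = Path(experiment_dir)
    experiment_dir.mkdir(parents=True, exist_ok=True)
    record = {"timestamp": time.time(), "params": params, "metrics": metrics}
    with open(experiment_dir / "runs.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

def best_run(experiment_dir, metric):
    """Return the logged run with the highest value of `metric`."""
    with open(Path(experiment_dir) / "runs.jsonl") as f:
        runs = [json.loads(line) for line in f]
    return max(runs, key=lambda r: r["metrics"][metric])
```

Dedicated tracking tools add much more on top of this (artefact storage, UI comparison, lineage), but the core idea is the same: every run's configuration and results are recorded automatically, never lost in a notebook.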
MASTERMINE workshop on MLOps
To establish a robust MLOps methodology within the MASTERMINE project, active engagement with all partners delivering ML and AI-based solutions is crucial. To facilitate collaboration and the alignment of efforts, LIBRA is currently organising an internal workshop dedicated to MLOps best practices and tools.
The workshop will gather all technical partners to meticulously identify and document the specific needs of the ML models developed within the MASTERMINE project. This encompasses understanding the complex requirements spanning model training, deployment, monitoring, and maintenance. Thoroughly assessing these needs will yield essential insights for optimising the performance and effectiveness of the MASTERMINE ML and AI-based solutions and components.
What’s next?
After the workshop, LIBRA's focus will shift to setting up an open-source framework aimed at enhancing the project's MLOps approach. This framework will streamline operations by incorporating continuous integration and delivery, versioning, and automated model validation and assessment. Once implemented, it will provide MASTERMINE ML modellers with the tools needed to manage ML models efficiently throughout their entire lifecycle, from development to deployment and maintenance.