How to put machine learning models into production (2023)

Data scientists excel at creating models that represent and predict real-world data, but effectively deploying machine learning models is more of an art than a science. Deployment requires skills more commonly found in software engineering and DevOps. VentureBeat reports that 87% of data science projects never make it to production, while Redapt claims it is 90%. Both highlight that a critical factor which makes the difference between success and failure is the ability to collaborate and iterate as a team.

The goal of building a machine learning model is to solve a problem, and a machine learning model can only do so when it is in production and actively in use by consumers. As such, model deployment is as important as model building. As Redapt points out, there can be a “disconnect between IT and data science. IT tends to stay focused on making things available and stable. They want uptime at all costs. Data scientists, on the other hand, are focused on iteration and experimentation. They want to break things.” Bridging the gap between those two worlds is key to ensuring you have a good model and can actually put it into production.

Most data scientists feel that model deployment is a software engineering task and should be handled by software engineers because the required skills are more closely aligned with their day-to-day work. While this is somewhat true, data scientists who learn these skills will have an advantage, especially in lean organizations. Tools like TFX, MLflow, and Kubeflow can simplify the whole process of model deployment, and data scientists can (and should) quickly learn and use them.

The difficulties in model deployment and management have given rise to a new, specialized role: the machine learning engineer. Machine learning engineers are closer to software engineers than typical data scientists, and as such, they are the ideal candidates to put models into production. But not every company has the luxury of hiring specialized engineers just to deploy models. For today’s lean engineering shop, it is advisable that data scientists learn how to get their models into production.

In all this, another question looms — what is the most effective way to put machine learning models into production?

This question is critical, because machine learning promises lots of potential for businesses, and any company that can quickly and effectively get their models to production can outshine their competitors.

In this article, I’m going to talk about some of the practices and methods that will help get machine learning models in production. I’ll discuss different techniques and use cases, as well as the pros and cons of each method.

So without wasting any more time, let’s get to it!

From model to production

Many teams embark on machine learning projects without a production plan, an approach that often leads to serious problems when it’s time to deploy. It is both expensive and time-consuming to create models, and you should not invest in an ML project if you have no plan to put it in production, except of course when doing pure research. With a plan in hand, you won’t be surprised by any pitfalls that could derail your launch.

There are three key areas your team needs to consider before embarking on any ML project:

  1. Data storage and retrieval
  2. Frameworks and tooling
  3. Feedback and iteration

Data storage and retrieval

A machine learning model is of no use to anyone if it doesn’t have any data associated with it. You’ll likely have training, evaluation, testing, and even prediction data sets. You need to answer questions like:

  • How is your training data stored?
  • How large is your data?
  • How will you retrieve the data for training?
  • How will you retrieve data for prediction?

These questions are important as they will guide you on what frameworks or tools to use, how to approach your problem, and how to design your ML model. Before you do anything else in a machine learning project, think about these data questions.

Data can be stored on-premise, in cloud storage, or in a hybrid of the two. It makes sense to store your data where the model training will occur and where the results will be served: on-premise training and serving is best suited for on-premise data, especially if the data is large, while data stored in cloud storage systems like GCS, AWS S3, or Azure Storage should be matched with cloud ML training and serving.


The size of your data also matters a lot. If your dataset is large, then you need more computing power for preprocessing steps as well as model optimization phases. This means you either have to plan for more compute if you’re operating locally, or set up auto-scaling in a cloud environment from the start. Remember, either of these can get expensive if you haven’t thought through your data needs, so pre-plan to make sure your budget can support the model through both training and production.

Even if you have your training data stored together with the model to be trained, you still need to consider how that data will be retrieved and processed. Here the question of batch vs. real-time data retrieval comes to mind, and this has to be considered before designing the ML system. Batch data retrieval means that data is retrieved in chunks from a storage system while real-time data retrieval means that data is retrieved as soon as it is available.
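The distinction can be sketched in a few lines of Python (the record source, function names, and batch size here are illustrative, not tied to any particular storage system):

```python
def fetch_batches(records, batch_size):
    """Batch retrieval: pull fixed-size chunks from storage."""
    for i in range(0, len(records), batch_size):
        yield records[i:i + batch_size]

def fetch_stream(records):
    """Real-time retrieval: hand over each record as soon as it is available."""
    for record in records:
        yield record

rows = list(range(10))
batches = list(fetch_batches(rows, batch_size=4))   # 3 chunks: 4 + 4 + 2
stream = list(fetch_stream(rows))                   # 10 individual records
```

A batch system can amortize I/O and preprocessing over each chunk, while a streaming system must keep per-record latency low, which is why the two call for different designs.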

Along with training data retrieval, you will also need to think about prediction data retrieval. Your prediction data is the data your model will receive at inference time, and it is rarely as neatly packaged as the training data, so you need to consider a few more issues related to how your model will receive it:

  • Are you getting inference data from webpages?
  • Are you receiving prediction requests from APIs?
  • Are you making batch or real-time predictions?

and so on.

If you’re getting data from webpages, the question then is: what type of data? Data from users on webpages could be structured data (CSVs, JSON) or unstructured data (images, videos, sound), and the inference engine should be robust enough to retrieve and process it and to make predictions. Inference data from webpages may be very sensitive to users, and as such, you must take things like privacy and ethics into consideration. Here, frameworks like federated learning, where the model is brought to the data and the data never leaves the user’s device, can be considered.

Another issue here has to do with data quality. Data used for inference will often be very different from training data, especially when it comes directly from end-users rather than APIs. Therefore you must provide the necessary infrastructure to fully automate the detection of changes in the data as well as the processing of this new data.
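One hedged sketch of such automated change detection is to compare a feature's live statistics against statistics recorded at training time, and flag drift when the live mean shifts by more than a few training standard deviations (the threshold and numbers below are illustrative assumptions, not a production-grade detector):

```python
import statistics

def drift_detected(train_values, live_values, k=3.0):
    """Flag drift when the live mean is more than k training stdevs away."""
    mu = statistics.mean(train_values)
    sigma = statistics.stdev(train_values)
    live_mu = statistics.mean(live_values)
    return abs(live_mu - mu) > k * sigma

train = [10.0, 10.5, 9.8, 10.2, 10.1]   # feature values seen at training time
stable = [10.3, 9.9, 10.0]              # live values close to training
shifted = [15.2, 16.1, 15.8]            # live values that have drifted

drift_detected(train, stable)   # False: live mean close to training mean
drift_detected(train, shifted)  # True: live mean has drifted far away
```

Real systems typically track many features and use distribution-level tests rather than a single mean, but the core idea is the same: record training statistics, then compare incoming data against them automatically.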

As with retrieval, you need to consider whether inference is done in batches or in real-time. These two scenarios require different approaches, as the technology and skills involved may differ. For batch inference, you might want to save prediction requests to a central store and then make inferences after a designated period, while in real-time, prediction is performed as soon as the inference request is made. Knowing this will enable you to effectively plan when and how to schedule compute resources, as well as what tools to use.
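The two serving patterns described above can be sketched in a few lines (the model here is an illustrative stand-in, and the flush schedule is left out for brevity):

```python
def model(x):
    """Stand-in for a trained predictor."""
    return x * 2

def predict_realtime(request):
    """Real-time: predict the moment a request arrives."""
    return model(request)

class BatchPredictor:
    """Batch: queue requests in a central store, run inference on a schedule."""
    def __init__(self):
        self.queue = []

    def submit(self, request):
        self.queue.append(request)          # saved, not yet predicted

    def flush(self):                        # called at the designated period
        results = [model(r) for r in self.queue]
        self.queue = []
        return results

bp = BatchPredictor()
for r in (1, 2, 3):
    bp.submit(r)
batch_results = bp.flush()        # [2, 4, 6]
rt_result = predict_realtime(5)   # 10
```

In practice the queue would live in durable storage and `flush` would be driven by a scheduler, but the resource-planning implication is visible even here: batch work is bursty and schedulable, real-time work must always have capacity available.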

Raising and answering questions relating to data storage and retrieval is important and will get you thinking about the right way to design your ML project.

Frameworks and tooling

Your model isn’t going to train, run, and deploy itself. For that, you need frameworks and tooling, software and hardware that help you effectively deploy ML models. These can be frameworks like Tensorflow, Pytorch, and Scikit-Learn for training models, programming languages like Python, Java, and Go, and even cloud environments like AWS, GCP, and Azure.

After examining and preparing your use of data, the next thing to consider is what combination of frameworks and tools to use.

The choice of framework is very important, as it can decide the continuity, maintenance, and use of a model. In this step, you must answer the following questions:

  • What is the best tool for the task at hand?
  • Is your choice of tools open-source or closed?
  • How many platforms/targets does the tool support?

To help determine the best tool for the task, you should research and compare findings for different tools that perform the same job. For instance, you can compare these tools based on criteria like:

Efficiency: How efficient is the framework or tool in production? A framework or tool is efficient if it optimally uses resources like memory, CPU, or time. It is important to consider the efficiency of the frameworks or tools you intend to use because they have a direct effect on project performance, reliability, and stability.


Popularity: How popular is the tool in the developer community? Popularity often means it works well, is actively in use, and has a lot of support. It is also worth mentioning that there may be newer tools that are less popular but more efficient than popular ones, especially among closed-source, proprietary tools. You’ll need to weigh that when picking a proprietary tool to use. Generally, in open-source projects, you’d lean toward popular and more mature tools, for reasons I’ll discuss below.

Support: What is support like for the framework or tool? Does it have a vibrant community behind it if it is open-source, or does the vendor provide good support if it is closed-source? How fast can you find tips, tricks, tutorials, and other use cases in actual projects?

Next, you also need to know whether the tools or frameworks you have selected are open-source or not. There are pros and cons to this, and the answer will depend on things like budget, support, continuity, community, and so on. Sometimes, you can get a proprietary build of open-source software, which means you get the benefits of open source plus premium support.

One more question you need to answer is how many platforms/targets does your choice of framework support? That is, does your choice of framework support popular platforms like the web or mobile environments? Does it run on Windows, Linux, or Mac OS? Is it easy to customize or implement in this target environment? These questions are important as there can be many tools available to research and experiment on a project, but few tools that adequately support your model while in production.

Feedback and iteration

ML projects are never static. This is part of engineering and design that must be considered from the start. Here you should answer questions like:

  • How do we get feedback from a model in production?
  • How do you set up continuous delivery?

Getting feedback from a model in production is very important. Actively tracking and monitoring model state can warn you in cases of model performance degradation or decay, bias creep, or even data skew and drift. This will ensure that such problems are quickly addressed before the end-user notices.
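As a minimal sketch of what such tracking could look like, consider a monitor that keeps a sliding window of labelled outcomes and warns when live accuracy falls below a threshold (the window size, threshold, and outcome feed are illustrative assumptions):

```python
from collections import deque

class AccuracyMonitor:
    """Warn when live accuracy over a sliding window drops below a threshold."""
    def __init__(self, window=100, threshold=0.8):
        self.outcomes = deque(maxlen=window)   # True/False per prediction
        self.threshold = threshold

    def record(self, prediction, actual):
        self.outcomes.append(prediction == actual)

    def degraded(self):
        if not self.outcomes:
            return False
        return sum(self.outcomes) / len(self.outcomes) < self.threshold

monitor = AccuracyMonitor(window=5, threshold=0.8)
for pred, actual in [(1, 1), (0, 0), (1, 0), (1, 0), (0, 1)]:
    monitor.record(pred, actual)
monitor.degraded()   # True: only 2 of the last 5 predictions were correct
```

In production, ground-truth labels often arrive late (here, whether the user actually clicked), so the window would be fed asynchronously, and the alert would feed into retraining rather than just a log line.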

Consider how to experiment on, retrain, and deploy new models in production without bringing the current model down or otherwise interrupting its operation. A new model should be properly tested before it is used to replace the old one. This idea of continuously testing and deploying new models without interrupting the existing model’s operation is called continuous delivery.
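The test-before-replace idea can be sketched as a simple promotion gate (the models, metric, and gain threshold here are illustrative stand-ins, not a real deployment system):

```python
def accuracy(model, examples):
    """Fraction of (input, label) pairs the model gets right."""
    return sum(model(x) == y for x, y in examples) / len(examples)

def promote_if_better(current, candidate, holdout, min_gain=0.0):
    """Return the model that should serve traffic."""
    if accuracy(candidate, holdout) > accuracy(current, holdout) + min_gain:
        return candidate          # candidate passes the gate
    return current                # keep serving the existing model

holdout = [(0, 0), (1, 1), (2, 0), (3, 1)]   # (input, label) evaluation set

def current(x):      # stand-in: always predicts the majority class
    return 0

def candidate(x):    # stand-in: matches the label pattern exactly
    return x % 2

serving = promote_if_better(current, candidate, holdout)
```

Here `accuracy(current, holdout)` is 0.5 and `accuracy(candidate, holdout)` is 1.0, so `serving` is the candidate. A real gate would run on a proper evaluation set and often include shadow traffic or an A/B phase before the swap.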

There are many other issues you’ll face when getting a model into production, and this article is not law, but I’m confident that most of the questions you’ll ask fall under one of the categories stated above.

An example of machine learning deployment

Now, I’m going to walk you through a sample ML project. In this project, you’re an ML engineer working on a promising project, and you want to design a fail-proof system that can effectively deploy, track, and monitor an ML model.

Consider Adstocrat, an advertising agency that provides online companies with efficient ad tracking and monitoring. They have worked with big companies and have recently won a contract to build a machine learning system to predict whether customers will click on an ad shown on a webpage. The contractor has a large dataset in a Google Cloud Storage (GCS) bucket and wants Adstocrat to develop an end-to-end ML system for them.

As the engineer in charge, you have to come up with a design solution before the project kicks off. To approach this problem, ask each of the questions raised earlier and develop a design for this end-to-end system.

Data concerns

First, let’s talk about the data. How is your training data stored?

The data is stored in a GCS bucket and comes in two forms. The first is a CSV file describing the ad, and the second is the corresponding image of the ad. The data is already in the cloud, so it may be better to build your ML system in the cloud. You’ll get better latency for I/O, easy scaling as data becomes larger (hundreds of gigabytes), and quick setup and configuration for any additional GPUs and TPUs.


How large is your data?

The contractor serves millions of ads every month, and the data is aggregated and stored in the cloud bucket at the end of every month. So now you know your data is large (hundreds of gigabytes of images), so your hunch of building your system in the cloud is stronger.

How will you retrieve the data for training?

Since data is stored in the GCS bucket, it can be easily retrieved and consumed by models built on the Google Cloud Platform. So now you have an idea of which cloud provider to use.

How will you retrieve data for prediction?

In terms of inference data, the contractor informed you that inference will be requested through their internal API; as such, data for prediction will arrive via calls to a REST API. This gives you an idea of the target platform for the project.
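As a rough illustration of that target, the endpoint might look like the following framework-agnostic handler. The model, field names, and response shape are assumptions for illustration only; in practice this would be wired into a web framework or a serving system such as TensorFlow Serving:

```python
import json

def predict_proba(features):
    """Stand-in for the real click-through model."""
    return 0.5

def handle_predict(request_body: str) -> str:
    """Parse a JSON prediction request and return a JSON response."""
    payload = json.loads(request_body)
    features = payload["ad_features"]          # hypothetical field name
    score = predict_proba(features)
    return json.dumps({"click_probability": score})

response = handle_predict('{"ad_features": [0.1, 0.2]}')
# response == '{"click_probability": 0.5}'
```

Keeping the request parsing and the model call in one small, testable function like this makes it easy to move the same logic between frameworks or serving backends later.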

Frameworks and tools for the project

There are many combinations of tools you can use at this stage, and the choice of one tool may affect the others. In terms of programming languages for prototyping, model building, and deployment, you can decide to choose the same language for these three stages or use different ones according to your research findings. For instance, Java is a very efficient language for backend programming, but cannot be compared to a versatile language like Python when it comes to machine learning.

After consideration, you decide to use Python as your programming language, Tensorflow for model building because you will be working with a large dataset that includes images, and Tensorflow Extended (TFX), an open-source tool released and used internally at Google, for building your pipelines. What about the other aspects of the model building like model analysis, monitoring, serving, and so on? What tools do you use here? Well, TFX pretty much covers it all!

TFX provides a bunch of frameworks, libraries, and components for defining, launching, and monitoring machine learning models in production. The components available in TFX let you build efficient ML pipelines specifically designed to scale from the start. These components have built-in support for ML modeling, training, serving, and even managing deployments to different targets.


TFX is also compatible with your choice of programming language (Python), as well as your choice of deep learning model builder (Tensorflow), and this will encourage consistency across your team. Also, since TFX and Tensorflow were built by Google, they have first-class support in the Google Cloud Platform. And remember, your data is stored in GCS.

If you want the technical details on how to build a complete end-to-end pipeline with TFX, see the links below:

TensorFlow Extended (TFX) | ML Production Pipelines

The TensorFlow Blog

Is your choice of tools open-source or closed?

Python, TFX, and Tensorflow are all open-source, and they are the major tools for building your system. For computing power and storage, you are using GCP, which is a paid, managed cloud service. This has its pros and cons and may depend on your use case as well. Some of the pros to consider when using managed cloud services are:

  • They are cost-efficient
  • Quick setup and deployment
  • Efficient backup and recovery

Some of the cons are:

  • Security issue, especially for sensitive data
  • Internet connectivity may affect work since everything runs online
  • Recurring costs
  • Limited control over tools

In general, for smaller businesses like startups, it is usually cheaper and better to use managed cloud services for your projects.

How many platforms/targets does the tool support?

TFX and Tensorflow run anywhere Python runs, and that’s a lot of places. Also, models built with Tensorflow can easily be saved and served in the browser using Tensorflow.js, on mobile devices and IoT using Tensorflow Lite, in the cloud, and even on-prem.

Feedback and Iteration concerns

How do we get feedback from a model in production?

TFX supports a feedback mechanism that can be easily used to manage model versioning as well as rolling out new models. Custom feedback can be built around this tool to effectively track models in production. A TFX Component called TensorFlow Model Analysis (TFMA) allows you to easily evaluate new models against current ones before deployment.

Looking back at the answers above, you can already begin to picture what your final ML system design will look like. Getting this part right before model building or data exploration is very important.


Effectively putting an ML model in production does not have to be hard if all the boxes are ticked before embarking on a project. This planning is critical for any ML project you embark on and should be prioritized!

While this post is not exhaustive, I hope it has provided you with a guide and intuition on how to approach an ML project to put it in production.


Thanks for reading! See you again another time.

Tags: data science, machine learning, tensorflow


How do you integrate machine learning models into production? ›

Deploy your first ML model to production with a simple tech stack
  1. Training a machine learning model on a local system.
  2. Wrapping the inference logic into a flask application.
  3. Using docker to containerize the flask application.
  4. Hosting the docker container on an AWS ec2 instance and consuming the web-service.

How you test your ML models for production scale? ›

4 steps model testing:
  1. Local development. Model development could often start with a hypothesis, say. ...
  2. Testing in CI/CD. The second step in the ML model testing I recommend you to implement is testing as part of CI/CD. ...
  3. Stage testing / Shadow testing. ...
  4. A/B test.
23 Mar 2020

What are some best practices for optimizing the performance of machine learning models in a production setting? ›

Here are the 5 best practices
  • Data Assessment. To start, data feasibility should be checked — Do we even have the right data sets to run machine learning models on top? ...
  • Evaluation of the right tech stack. ...
  • Robust Deployment approach. ...
  • Post deployment support & testing. ...
  • Change management & communication.
3 Aug 2022

How does machine learning work in production? ›

In production systems, machine learning is used to train models to make predictions that are used in the system. In some systems, those predictions are the very core of the system, whereas in others they provide only an auxiliary feature.

What must you do before you can deploy a model into production? ›

The following 6 steps will guide you through the process of deploying your machine learning model in production:
  • Create Watson ML Service.
  • Create a set of credentials for using the service.
  • Download the SDK.
  • Authenticate and Save the model.
  • Deploy the model.
  • Call the model.
4 Jan 2018

How do you deploy to production? ›

Deploy to Production: 5 Tips to Make It Smoother
  1. Automate As Much As Possible. ...
  2. Build and Pack Your Application Only Once. ...
  3. Deploy the Same Way All the Time. ...
  4. Deploy Using Feature Flags In Your Application. ...
  5. Deploy in Small Batches, and Do It Often.
13 Mar 2018

Can a ML model give 100% accuracy? ›

Once a machine learning model is trained and the training accuracy is calculated, so there might be a huge chance that the accuracy would result in a high range probably in the nineties or even 100%.

What problems you may face getting an ML model in production? ›

3 Challenges for ML Models in Production
  • No human intervention. ML models are highly likely to make better predictions than us. They work much faster and more scalable than us. ...
  • Data changes. The world constantly changes. ...
  • Communication between stakeholders. It takes different sets of skills to build an ML system.
9 Jun 2022

How can I make my ML model more robust? ›

Add regularization: Reduces variance, For Eg L1 and L2 regularization. Try different models: Can use a model that is more robust to outliers. For Eg, tree-based models(random forests, gradient boosting) are generally less affected by outliers than linear models.

What are the 7 key steps to build your machine learning model? ›

It can be broken down into 7 major steps :
  • Collecting Data: As you know, machines initially learn from the data that you give them. ...
  • Preparing the Data: After you have your data, you have to prepare it. ...
  • Choosing a Model: ...
  • Training the Model: ...
  • Evaluating the Model: ...
  • Parameter Tuning: ...
  • Making Predictions.
28 Oct 2022

Why are moving machine learning models to production so hard? ›

As the model moves forward to production, it is typically exposed to larger volumes of data and data transport modes. Your team will need several tools to both monitor and solve for the performance and scalability challenges that will show up over time.

How is AI used in production? ›

AI tools can process and interpret vast volumes of data from the production floor to spot patterns, analyze and predict consumer behavior, detect anomalies in production processes in real-time, and more.

What are the 3 main steps in the deployment process? ›

Software deployment process mainly consists of 3 stages: development, testing and monitoring.

What best practices do you consider when deploying a model to a production environment? ›

The following best practices can help you deploy software more effectively.
  • Keep Separate Clusters for Production and Non-Production.
  • Apply Resource Limits.
  • Collect Deployment Metrics.
  • Implement a Secrets Strategy.
  • Automate Database Updates.

How do you deploy to production without downtime? ›

How to Achieve Zero Downtime Deployment
  1. Step 1: Create the new version of your application that you want to deploy.
  2. Step 2: Deploy the new version of your application or service simultaneously with the current version.
  3. Step 3: Gradually migrate traffic and/or your database/users to the new version of the application.
21 Jun 2022

What are the five stages of deployment? ›

The Five Stages of Deployment

These stages are comprised as follows: pre-deployment, deployment, sustainment, re-deployment and post-deployment. Each stage is characterized both by a time frame and specific emotional challenges, which must be dealt with and mastered by each of the Family members.

What is the difference between deployment and production? ›

“Released”: A business term that defines functionality being available to an end-user. “Deployed” doesn't necessarily mean “Released”. “Production Ready” = A product Increment that is “Done” and potentially releasable to the end-user. “Ready for Release” is a synonym to Production Ready.

How do I move code from dev to production? ›

What is the process to move the process to Prod from Dev environment?
  1. Copy the process folder and Go to the production bot.
  2. Paste the process folder in Production bot and open the xaml file publish there.
  3. Add the process in Process tab in Production orchestrator and Use it.
10 Aug 2020

Is 70% accuracy good in machine learning? ›

Good accuracy in machine learning is subjective. But in our opinion, anything greater than 70% is a great model performance. In fact, an accuracy measure of anything between 70%-90% is not only ideal, it's realistic.

Why is training accuracy not 100%? ›

Please remember having 100% accuracy most likely indicates over learning / over fitting . The idea is to extract a pattern from training data so that it can have decent predictive performance on unseen input. Hundred percent accuracy could mean just a memorization of the training set and result in poor generalization.

What did you do when your machine learning model did not perform as expected in production? ›

Some of the models are often highly unstable and do not perform that well with time. In such cases, the business might demand high-frequency model revision and model monitoring. With higher lead time in model creation, businesses might start going back to intuition-based strategy.

What are the main 3 types of ML models? ›

Amazon ML supports three types of ML models: binary classification, multiclass classification, and regression. The type of model you should choose depends on the type of target that you want to predict.

How do you increase precision in machine learning? ›

8 Methods to Boost the Accuracy of a Model
  1. Add more data. Having more data is always a good idea. ...
  2. Treat missing and Outlier values. ...
  3. Feature Engineering. ...
  4. Feature Selection. ...
  5. Multiple algorithms. ...
  6. Algorithm Tuning. ...
  7. Ensemble methods.
29 Dec 2015

How can I farm faster in ML? ›

Mobile Legends Ultimate Farming Guide
  1. Gold Lane. Gold Lane is very important for farming during the first few minutes of a match in Mobile Legends: Bang Bang. ...
  2. Do Rotations When There Are No Minions. ...
  3. Don't Forget to Kill Enemy Heroes. ...
  4. Steal your Enemies' Jungle Minions. ...
  5. Specific Lane Farming.
12 Dec 2021

How can data models improve accuracy? ›

  1. Method 1: Add more data samples. Data tells a story only if you have enough of it. ...
  2. Method 2: Look at the problem differently. ...
  3. Method 3: Add some context to your data. ...
  4. Method 4: Finetune your hyperparameter. ...
  5. Method 5: Train your model using cross-validation. ...
  6. Method 6: Experiment with a different algorithm. ...
  7. Takeaways.
17 Feb 2021

How many machine learning models make it to production? ›

A common complaint among ML teams, however, is that deploying ML models in production is a complicated process. It is such a widespread issue that some experts estimate that as many as 90 percent of ML models never make it into production in the first place.

What are the 5 production strategies? ›

The main strategies used in production planning and control are the chase strategy, level production, make-to-stock, and assemble to order.

What are the 3 production strategies? ›

Here are six of the different types of production strategies:
  • Assemble-to-order. Assemble-to-order (ATO) is a production strategy where companies produce products on a customer-order basis, storing their inventory as assembly-ready components. ...
  • Level production. ...
  • Chase strategy. ...
  • Make-to-stock. ...
  • Make-to-order. ...
  • Engineer-to-order.
1 Dec 2021

What are the 5 steps in the production process? ›

Production Planning in 5 Steps
  • Step 1: forecast the demand of your product.
  • Step 2: determine potential options for production.
  • Step 3: choose the option for production that use the combination of resources more effectively.
  • Step 4: monitor and control.
  • Step 5: Adjust.

What is the most common danger for machine learning? ›

Model stealing is one of the most important security risks in machine learning. Model stealing techniques are used to create a clone model based on information or data used in the training of a base model.

Can you name 4 of the main challenges in machine learning? ›

Noisy data, incomplete data, inaccurate data, and unclean data lead to less accuracy in classification and low-quality results. Hence, data quality can also be considered as a major common problem while processing machine learning algorithms.

What are the five limitations of machine learning? ›

5 key limitations of machine learning algorithms
  • Ethical concerns. There are, of course, many advantages to trusting algorithms. ...
  • Deterministic problems. ...
  • Lack of Data. ...
  • Lack of interpretability. ...
  • Lack of reproducibility. ...
  • With all its limitations, is ML worth using?
18 Mar 2022

What is the hardest part of machine learning? ›

The reinforcement learning is hardest part of machine learning. The most important results in deep learning such as image classification so far were obtained by supervised learning or unsupervised learning.

Which algorithm is best for prediction? ›

Regression and classification algorithms are the most popular options for predicting values, identifying similarities, and discovering unusual data patterns.
  • Naive Bayes algorithm.
  • KNN classification algorithm.
  • K-Means.
  • Random forest algorithm.
  • Artificial neural networks (ANNs)
  • Recurrent neural networks (RNNs)
  • Takeaways.
30 May 2022

What are 2 main types of machine learning algorithm? ›

There are four types of machine learning algorithms: supervised, semi-supervised, unsupervised and reinforcement.

What are the 2 types of learning in machine learning? ›

These are three types of machine learning: supervised learning, unsupervised learning, and reinforcement learning.

How is machine learning used in production? ›

Machine learning enables predictive maintenance by predicting equipment failures before they occur, scheduling timely maintenance, and reducing unnecessary downtime. Manufacturers spend far too much time fixing breakdowns instead of allocating resources for planned maintenance.

What are the 4 stages of AI process? ›

Here are some steps that organizations can take to move from a business intelligence strategy to a machine learning one.
  • Stage 1: Collect and prepare data. ...
  • Stage 2: Make sense of data. ...
  • Stage 3: Use data to answer questions. ...
  • Stage 4: Create predictive applications.
2 Oct 2018

How can AI be used to enhance productivity? ›

Using AI and machine learning, systems can test hundreds of mathematical models of production and outcome possibilities, and be more precise in their analysis and results. This is done while adapting to new information such as new product innovations, supply chain disruptions, or sudden changes in demand.

How is integration used in machine learning? ›

In the context of machine learning and data science, you might use integrals to calculate the area under the curve (for instance, to evaluate the performance of a model with the ROC curve, or to calculate probability from densities.

What is integration in machine learning? ›

Integration is all about the connecting and moving of data, so that it can be safely stored and used to help you run your business and make decisions.

Is machine learning used in manufacturing? ›

In the manufacturing sector, ML can be applied across the supply chain to deliver real business benefits such as: Improving operational efficiency and lowering costs, by using ML to optimise the factory floor.

How machine learning models are implemented into an app? ›

Table of contents
  1. Workflow of Machine Learning project on Android.
  2. Prerequisite to implement workflow of Machine Learning on Android.
  3. Hands-on Implementation of Machine Learning on Android.
  4. Building a Machine learning model.
  5. Build a Flask API.
  6. Test Application using Postman.
  7. Create Android App.
  8. Connectivity of API to Android APP.
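The serving step in that workflow — expose the trained model behind an HTTP endpoint the app can call — can be sketched as follows. The workflow above names Flask; this sketch uses only Python's standard library so it runs anywhere, and the "model" is a stand-in linear scorer, not a real trained model:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(features):
    """Stand-in for a trained model: a fixed linear scorer."""
    weights = [0.4, 0.6]
    return sum(w * x for w, x in zip(weights, features))

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON request body, e.g. {"features": [1.0, 2.0]}.
        length = int(self.headers["Content-Length"])
        payload = json.loads(self.rfile.read(length))
        # Run the model and return the prediction as JSON.
        body = json.dumps({"prediction": predict(payload["features"])}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# To serve requests (blocking call):
# HTTPServer(("0.0.0.0", 5000), PredictHandler).serve_forever()
```

The Android app (or Postman, as in step 6) would then POST feature values to this endpoint and read the prediction out of the JSON response.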

What are the 5 system integration methods? ›

We'll discuss the pros and cons of each type and when to use each one.
  • Manual data integration. ...
  • Middleware data integration. ...
  • Application-based integration. ...
  • Uniform access integration. ...
  • Common storage integration (sometimes referred to as data warehousing)

What are the three integration methods? ›

The different methods of integration include: integration by substitution, integration by parts, and integration using trigonometric identities.
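As a worked example of the second method, integration by parts applies the rule $\int u\,dv = uv - \int v\,du$:

```latex
% Integration by parts with u = x and dv = e^x\,dx:
\int x e^x \, dx = x e^x - \int e^x \, dx = (x - 1) e^x + C
```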

What are the 4 types of integration? ›

The main types of integration are:
  • Backward vertical integration.
  • Conglomerate integration.
  • Forward vertical integration.
  • Horizontal integration.

What are the 4 steps of integration? ›

4 Steps to a Successful Integration Process
  • People. Acquisitions rise and fall on the quality and dedication of the people called upon to carry them out. ...
  • Customers. ...
  • Culture. ...
  • Communication.

Which industry will gain most from machine learning? ›

AI and ML algorithms offer great potential in the finance industry. These algorithms are self-learning and can be extremely valuable to both the customer and the financial organization if fed the right data.

Which industry uses machine learning the most? ›

Five industries rely especially heavily on artificial intelligence and machine learning:
  • Transportation. ...
  • Healthcare. ...
  • Finance. ...
  • Agriculture. ...
  • Retail and Customer Service.

Which company uses most machine learning? ›

Here are some examples of major companies using machine learning:
  • Yelp. Yelp hosts reviews from a large assortment of businesses all over the world. ...
  • Pinterest. Pinterest is a social media service that's a bit off-target from the norm. ...
  • Facebook. ...
  • Twitter. ...
  • Google. ...
  • Baidu. ...
  • HubSpot. ...
  • IBM.

What are the 3 main types of machine learning tasks? ›

The three machine learning types are supervised, unsupervised, and reinforcement learning.


1. Keynote Presentation: Putting Machine Learning Models into Large Scale Production for Drug Discovery
2. Machine Learning in 5 Minutes: How to deploy a ML model (SurveyMonkey Engineer explains)
3. Machine Learning Models in Production (DataWorks Summit)
4. How to Deploy and Productize Machine Learning Models (Women Who Code)
5. Shawn Scully: Production and Beyond: Deploying and Managing Machine Learning Models
6. Putting Machine Learning into Production: An Overview — Srijith Rajamohan, Databricks (Berkeley School of Information)
Article information

Author: Saturnina Altenwerth DVM

Last Updated: 03/13/2023
