
HOW TO LEARN MACHINE LEARNING FROM SCRATCH STEP BY STEP




POINTS TO BE COVERED IN THIS TOPIC:
  • How can you learn machine learning from scratch, step by step?
  • What concepts do you need to master in data science?

So if you want to learn MACHINE LEARNING FROM SCRATCH STEP BY STEP, you have come to the right place. After reading this article, you will understand the core concepts of machine learning without wasting any time. Let's get started.

So when we think about MACHINE LEARNING, the first question that comes to mind is:

WHAT IS MACHINE LEARNING?

The short answer: machine learning is the study of computer algorithms that improve automatically through experience and through the use of data. It is seen as a part of artificial intelligence.


Machine learning focuses on applications that learn from experience and improve their decision-making or predictive accuracy over time.




Machine learning is a branch of artificial intelligence (AI) focused on building applications that learn from data and improve their accuracy over time without being programmed to do so. 

In data science, an algorithm is a sequence of statistical processing steps. In machine learning, algorithms are 'trained' to find patterns and features in massive amounts of data to make decisions and predictions based on new data. The better the algorithm, the more accurate the decisions and predictions will become as it processes more data.

Today, examples of machine learning are all around us. Digital assistants search the web and play music in response to our voice commands. Websites recommend products and movies and songs based on what we bought, watched or listened to before. Robots vacuum our floors while we do . . . something better with our time. Spam detectors stop unwanted emails from reaching our inboxes. Medical image analysis systems help doctors spot tumors they might have missed. And the first self-driving cars are hitting the road.

We can expect more. As big data keeps getting bigger, as computing becomes more powerful and affordable, and as data scientists keep developing more capable algorithms, machine learning will drive greater and greater efficiency in our personal and work lives.

HOW MACHINE LEARNING WORKS

There are four basic steps for building a machine learning application (or model). These are typically performed by data scientists working closely with the business professionals for whom the model is being developed.

Step 1: Select and prepare a training data set

Training data is a data set representative of the data the machine learning model will ingest to solve the problem it’s designed to solve. In some cases, the training data is labeled data—‘tagged’ to call out features and classifications the model will need to identify. Other data is unlabeled, and the model will need to extract those features and assign classifications on its own.

In either case, the training data needs to be properly prepared—randomized, de-duped and checked for imbalances or biases that could impact the training. It should also be divided into two subsets: the training subset, which will be used to train the application, and the evaluation subset, used to test and refine it.
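As an illustration of this step, here is a minimal sketch in Python (assuming the pandas and scikit-learn libraries are available) that de-duplicates and shuffles a prepared data set, checks for class imbalance, and splits it into training and evaluation subsets. The file name, column names, and the 80/20 split are illustrative assumptions, not part of any particular project.

```python
# A minimal sketch of Step 1: preparing and splitting a training data set.
# The file name, column names, and 80/20 split are illustrative assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split

# Load the raw data (hypothetical CSV with a 'label' column).
data = pd.read_csv("sales_records.csv")

# De-duplicate and shuffle the records.
data = data.drop_duplicates().sample(frac=1.0, random_state=42)

# Check for class imbalance before training.
print(data["label"].value_counts(normalize=True))

# Split into a training subset and an evaluation (test) subset.
features = data.drop(columns=["label"])
labels = data["label"]
X_train, X_eval, y_train, y_eval = train_test_split(
    features, labels, test_size=0.2, random_state=42, stratify=labels
)
```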

Step 2: Choose an algorithm to run on the training data set

Again, an algorithm is a set of statistical processing steps. The type of algorithm depends on the type (labeled or unlabeled) and amount of data in the training data set and on the type of problem to be solved.

Common types of machine learning algorithms for use with labeled data include the following:

  • Regression algorithms: Linear and logistic regression are examples of regression algorithms used to understand relationships in data. Linear regression is used to predict the value of a dependent variable based on the value of an independent variable. Logistic regression can be used when the dependent variable is binary in nature: A or B. For example, a linear regression algorithm could be trained to predict a salesperson’s annual sales (the dependent variable) based on its relationship to the salesperson’s education or years of experience (the independent variables). A related algorithm, the support vector machine, is useful when the dependent variable is more difficult to classify. (A minimal regression sketch appears after this list.)
  • Decision trees: Decision trees use classified data to make recommendations based on a set of decision rules. For example, a decision tree that recommends betting on a particular horse to win, place, or show could use data about the horse (e.g., age, winning percentage, pedigree) and apply rules to those factors to recommend an action or decision.
  • Instance-based algorithms: A good example of an instance-based algorithm is K-Nearest Neighbor (k-NN). It uses classification to estimate how likely a data point is to be a member of one group or another based on its proximity to other data points.
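To make the regression examples above concrete, here is a minimal, hypothetical sketch in Python using scikit-learn: a linear regression predicting annual sales from years of experience, and a logistic regression for a binary (A or B) outcome. The tiny data set is invented purely for illustration.

```python
# Hypothetical sketch of regression algorithms on labeled data.
# The tiny toy data set is invented purely for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

# Independent variable: years of experience; dependent variable: annual sales.
years_experience = np.array([[1], [3], [5], [7], [9]])
annual_sales = np.array([50_000, 90_000, 130_000, 170_000, 210_000])

linreg = LinearRegression().fit(years_experience, annual_sales)
print(linreg.predict([[6]]))  # predicted sales for 6 years of experience

# Binary outcome (0 = "A", 1 = "B") predicted from the same feature.
outcome = np.array([0, 0, 0, 1, 1])
logreg = LogisticRegression().fit(years_experience, outcome)
print(logreg.predict_proba([[6]]))  # probability of each class
```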

Algorithms for use with unlabeled data include the following:

  • Clustering algorithms: Think of clusters as groups. Clustering focuses on identifying groups of similar records and labeling the records according to the group to which they belong. This is done without prior knowledge about the groups and their characteristics. Types of clustering algorithms include K-means, TwoStep, and Kohonen clustering. (See the sketch after this list.)
  • Association algorithms: Association algorithms find patterns and relationships in data and identify frequent ‘if-then’ relationships called association rules. These are similar to the rules used in data mining.
  • Neural networks: A neural network is an algorithm that defines a layered network of calculations featuring an input layer, where data is ingested; at least one hidden layer, where calculations are performed to make different conclusions about the input; and an output layer, where each conclusion is assigned a probability. A deep neural network defines a network with multiple hidden layers, each of which successively refines the results of the previous layer. (For more, see the “Deep learning” section below.)
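As a sketch of the clustering idea mentioned in the list above, the following hypothetical Python snippet groups unlabeled points with K-means (scikit-learn); the toy points and the choice of two clusters are assumptions made only for illustration.

```python
# Hypothetical sketch of clustering unlabeled data with K-means.
# The toy points and the choice of 2 clusters are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

# Unlabeled data: each row is a record with two numeric features.
points = np.array([
    [1.0, 1.1], [0.9, 1.0], [1.2, 0.8],   # one natural group
    [8.0, 8.2], [7.9, 8.1], [8.3, 7.8],   # another natural group
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)

# The algorithm assigns a cluster label to every record on its own.
print(kmeans.labels_)           # e.g., [0 0 0 1 1 1]
print(kmeans.cluster_centers_)  # the center of each discovered group
```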

Step 3: Training the algorithm to create the model

Training the algorithm is an iterative process: it involves running variables through the algorithm, comparing the output with the results it should have produced, adjusting weights and biases within the algorithm that might yield a more accurate result, and running the variables again until the algorithm returns the correct result most of the time. The resulting trained, accurate algorithm is the machine learning model. This is an important distinction to note, because 'algorithm' and 'model' are often used interchangeably, incorrectly, even by machine learning mavens.
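The iterative loop described above can be sketched in a few lines of plain Python with NumPy: run the data through the model, compare the output with the expected results, nudge the weight and bias toward a smaller error, and repeat. This is a simplified illustration of the idea with invented numbers, not any particular library's training routine.

```python
# Simplified sketch of the training loop: predict, compare, adjust, repeat.
import numpy as np

# Toy labeled data: y is roughly 2*x + 1 plus a little noise.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

weight, bias = 0.0, 0.0       # initial guesses
learning_rate = 0.01

for step in range(2000):
    prediction = weight * x + bias   # run the variables through the model
    error = prediction - y           # compare with the expected output
    # Adjust the weight and bias in the direction that reduces the error.
    weight -= learning_rate * (2 * error * x).mean()
    bias -= learning_rate * (2 * error).mean()

print(weight, bias)  # should approach roughly 2 and 1
```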

Step 4: Using and improving the model 

The final step is to use the model with new data and, in the best case, for it to improve in accuracy and effectiveness over time. Where the new data comes from will depend on the problem being solved. For example, a machine learning model designed to identify spam will ingest email messages, whereas a machine learning model that drives a robot vacuum cleaner will ingest data resulting from real-world interaction with moved furniture or new objects in the room.
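As a small, hypothetical sketch of this step (assuming scikit-learn and joblib are available), the trained model from Step 3 can be saved, loaded later by the application, and fed new, unseen data as it arrives; the toy numbers are invented.

```python
# Hypothetical sketch of Step 4: saving a trained model and using it on new data.
import numpy as np
from sklearn.linear_model import LogisticRegression
import joblib

# Train a small model (standing in for the model produced in Step 3).
X_train = np.array([[1], [3], [5], [7], [9]])
y_train = np.array([0, 0, 0, 1, 1])
model = LogisticRegression().fit(X_train, y_train)

# Save the trained model so the application can reuse it later.
joblib.dump(model, "model.joblib")

# Later, in production: load the model and score new, unseen data as it arrives.
deployed = joblib.load("model.joblib")
new_data = np.array([[4], [8]])
print(deployed.predict(new_data))
```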

MACHINE LEARNING METHODS


Machine learning methods (also called machine learning styles) fall into three primary categories.

For a deep dive into the differences between these approaches, check out "Supervised vs. Unsupervised Learning: What's the Difference?"

Supervised machine learning            

Supervised machine learning trains itself on a labeled data set. That is, the data is labeled with the information that the machine learning model is being built to determine, and it may even be classified in the same ways the model is expected to classify new data. For example, a computer vision model designed to identify purebred German Shepherd dogs might be trained on a data set of various labeled dog images.

Supervised machine learning requires less training data than other machine learning methods and makes training easier because the results of the model can be compared to actual labeled results. But, properly labeled data is expensive to prepare, and there's the danger of overfitting, or creating a model so closely tied and biased to the training data that it doesn't handle variations in new data accurately.
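One simple check for the overfitting risk mentioned above is to compare the model's accuracy on the training subset with its accuracy on the held-out evaluation subset; a large gap suggests the model is too closely tied to its training data. A hypothetical sketch with invented data:

```python
# Hypothetical sketch: detecting overfitting by comparing train vs. evaluation accuracy.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                                # invented features
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)   # noisy labels

X_train, X_eval, y_train, y_eval = train_test_split(X, y, random_state=0)

# A very deep tree can memorize the training data (overfit).
model = DecisionTreeClassifier(max_depth=None).fit(X_train, y_train)
print("train accuracy:", model.score(X_train, y_train))   # likely near 1.0
print("eval accuracy: ", model.score(X_eval, y_eval))     # noticeably lower
```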


Unsupervised machine learning

Unsupervised machine learning ingests unlabeled data—lots and lots of it—and uses algorithms to extract meaningful features needed to label, sort, and classify the data in real-time, without human intervention. Unsupervised learning is less about automating decisions and predictions, and more about identifying patterns and relationships in data that humans would miss. Take spam detection, for example—people generate more email than a team of data scientists could ever hope to label or classify in their lifetimes. An unsupervised learning algorithm can analyze huge volumes of emails and uncover the features and patterns that indicate spam (and keep getting better at flagging spam over time).


Semi-supervised learning 

Semi-supervised learning offers a happy medium between supervised and unsupervised learning. During training, it uses a smaller labeled data set to guide classification and feature extraction from a larger, unlabeled data set. Semi-supervised learning can solve the problem of having not enough labeled data (or not being able to afford to label enough data) to train a supervised learning algorithm. 
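A minimal sketch of the idea, assuming scikit-learn's SelfTrainingClassifier is available (unlabeled samples are marked with -1 and a base model labels them iteratively); the toy data and 10% labeling rate are invented for illustration.

```python
# Hypothetical sketch of semi-supervised learning: a few labeled points
# guide the labeling of a larger unlabeled set (unlabeled samples use -1).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Pretend only ~10% of the labels are known; the rest are marked -1 (unlabeled).
y_partial = y.copy()
unlabeled = rng.random(100) > 0.1
y_partial[unlabeled] = -1

model = SelfTrainingClassifier(LogisticRegression()).fit(X, y_partial)
print(model.score(X, y))   # accuracy against the true labels, for illustration
```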

REINFORCEMENT MACHINE LEARNING

Reinforcement machine learning is a behavioral machine learning model that is similar to supervised learning, but the algorithm isn’t trained using sample data. This model learns as it goes by using trial and error. A sequence of successful outcomes will be reinforced to develop the best recommendation or policy for a given problem.
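As a rough illustration of trial-and-error learning with reinforced outcomes, here is a minimal, hypothetical Q-learning sketch for a tiny five-state corridor in which the agent earns a reward only at the right-hand end. It is a toy, not representative of a production system.

```python
# Toy Q-learning sketch: an agent in a 5-state corridor learns, by trial and
# error, that moving right (toward the reward at state 4) is the best policy.
import numpy as np

n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
q_table = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.2
rng = np.random.default_rng(0)

for episode in range(500):
    state = 0
    while state != n_states - 1:
        # Explore occasionally, otherwise exploit the best known action.
        if rng.random() < epsilon:
            action = int(rng.integers(n_actions))
        else:
            action = int(np.argmax(q_table[state]))
        next_state = max(0, state - 1) if action == 0 else min(n_states - 1, state + 1)
        reward = 1.0 if next_state == n_states - 1 else 0.0
        # Reinforce actions that lead toward successful outcomes.
        q_table[state, action] += alpha * (
            reward + gamma * q_table[next_state].max() - q_table[state, action]
        )
        state = next_state

print(np.argmax(q_table, axis=1))   # learned policy: mostly 1s ("move right")
```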

The IBM Watson® system that won the Jeopardy! challenge in 2011 is a good example. The system used reinforcement learning to decide whether to attempt an answer (or question, as it were), which square to select on the board, and how much to wager, especially on daily doubles.


DEEP LEARNING

Deep learning is a subset of machine learning (all deep learning is machine learning, but not all machine learning is deep learning). Deep learning algorithms define an artificial neural network that is designed to learn the way the human brain learns. Deep learning models require large amounts of data that pass through multiple layers of calculations, applying weights and biases in each successive layer to continually adjust and improve the outcomes.
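As a minimal illustration of a network with multiple hidden layers, here is a hypothetical sketch using scikit-learn's MLPClassifier on invented data; production deep learning models are normally built with dedicated frameworks such as TensorFlow or PyTorch and trained on far larger data sets.

```python
# Toy sketch of a neural network with two hidden layers; each layer refines
# the previous layer's output by applying learned weights and biases.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))              # invented input features
y = (X[:, 0] * X[:, 1] > 0).astype(int)    # a nonlinear target

model = MLPClassifier(hidden_layer_sizes=(16, 16),  # two hidden layers
                      activation="relu",
                      max_iter=1000,
                      random_state=0)
model.fit(X, y)
print(model.score(X, y))   # training accuracy on the toy data
```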

Deep learning models are typically unsupervised or semi-supervised. Reinforcement learning models can also be deep learning models. Certain types of deep learning models—including convolutional neural networks (CNNs) and recurrent neural networks (RNNs)—are driving progress in areas such as computer vision, natural language processing (including speech recognition), and self-driving cars. 

See the blog post “AI vs. Machine Learning vs. Deep Learning vs. Neural Networks: What’s the Difference?” for a closer look at how the different concepts relate.


REAL-WORLD MACHINE LEARNING USE CASES

As noted at the outset, machine learning is everywhere. Here are just a few examples of machine learning you might encounter every day:

  • Digital assistants: Apple Siri, Amazon Alexa, Google Assistant, and other digital assistants are powered by natural language processing (NLP), a machine learning application that enables computers to process text and voice data and 'understand' human language the way people do. Natural language processing also drives voice-driven applications like GPS and speech recognition (speech-to-text) software.
  • Recommendations: Deep learning models drive 'people also liked' and 'just for you' recommendations offered by Amazon, Netflix, Spotify, and other retail, entertainment, travel, job search, and news services.
  • Contextual online advertising: Machine learning and deep learning models can evaluate the content of a web page—not only the topic but nuances like the author's opinion or attitude—and serve up advertisements tailored to the visitor's interests.
  • Chatbots: Chatbots can use a combination of pattern recognition, natural language processing, and deep neural networks to interpret input text and provide suitable responses.
  • Fraud detection: Machine learning regression and classification models have replaced rules-based fraud detection systems, which have a high number of false positives when flagging stolen credit card use and are rarely successful at detecting criminal use of stolen or compromised financial data.
  • Cybersecurity: Machine learning can extract intelligence from incident reports, alerts, blog posts, and more to identify potential threats, advise security analysts, and accelerate the response.
  • Medical image analysis: The types and volume of digital medical imaging data have exploded, leading to more available information for supporting diagnoses but also more opportunity for human error in reading the data. Convolutional neural networks (CNNs), recurrent neural networks (RNNs), and other deep learning models have proven increasingly successful at extracting features and information from medical images to help support accurate diagnoses.
  • Self-driving cars: Self-driving cars require a machine learning tour de force—they must continuously identify objects in the environment around the car, predict how they will change or move, and guide the car around the objects as well as toward the driver's destination. Virtually every form of machine learning and deep learning algorithm mentioned above plays some role in enabling a self-driving automobile.


