I've been fascinated with machine learning for a while: I followed some tutorials, watched some videos, and read some articles, but I never found the time to focus on ML and get really good at it!
This article is the story of my road to becoming a Machine Learning expert.
Being a Machine Learning expert is one of my goals for the year 2019. My curiosity about machine learning started late in 2017; I followed some tutorials, participated in some ML hackathons, made some ML projects... so here is what I have done so far:
In 2017, I was watching Siraj Raval's YouTube channel, reading articles, and trying Udacity online courses and Andrew Ng's tutorials... but to be honest, I had a hard time understanding things. I was at the beginning of the journey and I didn't know the right path to start!
This changed after I took the following course on Udemy: Machine Learning A-Z™: Hands-On Python & R In Data Science. This course covered the theoretical side of things along with some practice in Python! After finishing it, I was able to understand most of Siraj's videos; I learned how to clean data, what a model is, and what the different models out there are. I also got an idea about deep learning, even though it was just an initiation (I didn't fully understand DL back then because it was a lot of information, but I had the goal of getting deeper into Deep Learning).
In 2018, I didn't really have time to follow more tutorials; I was busy with university, a university club, hackathons, etc. But I stayed curious about Machine Learning: I read some articles, watched some videos related to ML, and even decided to start my own YouTube channel to teach Machine Learning! The best way to learn something is by teaching! :) My main focus was to learn while presenting information, but after 4 or 5 videos I stopped; making videos really takes a lot of time and it just didn't work out (I am planning to go back to making videos this year, but it's just an idea).
In March 2018 I participated in my first Kaggle competition: https://www.kaggle.com/c/deephack. The night before the event I tried the Titanic problem on Kaggle, and the next day I was trying to figure out how to build a deep neural network model :D
My first submission was with a Random Forest model, but instead of trying to improve the data or the model, I was focusing on making an RNN when I had no idea about neural networks (one of the top 3 teams got their score with a similar model without a neural network, and they won the prize of a trip to F8 in the US). Well, it was an amazing experience!
After the event, I kept insisting on understanding how to make an RNN and how to solve that kind of problem. I followed some YouTube tutorials but I didn't understand a thing, and then I got busy with other things like school and events, so yeah...
I tried to apply machine learning in one of my school projects, so I made this: [Machine Learning] My first convolutional neural network in Python. I can't say I made it on my own from scratch, but it gave me an idea about CNNs!
In November 2018 I participated in my second Kaggle competition: https://www.kaggle.com/c/ai-hackathon. This one lasted 24 hours. I'm not sure why the ranks aren't shown anymore, but I am satisfied with the rank we got: we ranked 12th out of 25 teams, and I must say some teams shared their predicted results with other teams from the same school, so it wasn't fair; we could have gotten at least 6th place. Anyway, it was a fun experience! I should say that we got lucky with the neural network model, so again I felt I needed to improve my knowledge! I need to take ML more seriously.
In December 2018, I gave a machine learning workshop at my university; it was basically an introduction to Kaggle and the Titanic problem.
Here is the roadmap of my journey:
2019 is my year to become a Machine Learning expert! I mentioned what I did in 2017 and 2018 just to show my current level in Machine Learning. I am still a beginner; on a scale of 1 to 10, I would say my level is 2/10, and my goal this year is to get to level 7/10 or above. I will be focusing more on Deep Learning, but I will also improve my knowledge of Machine Learning and participate in more Kaggle competitions.
My end-of-studies internship is related to Machine Learning, so I hope that will help.
Here are the tutorials I have followed since the day I started writing this blog:
Tutorial 1 (January 2019): Deep Learning: Face Recognition
For my internship, I will build a face recognition system, which is why I started with this tutorial. I learned the steps for making a face recognition system; it is completely different from object recognition, and even though the library the instructor used (based on Dlib) made the implementation really easy, I understood the theory behind it.
Tutorial 2 (January 2019): Deep Learning: Image Recognition
The instructor of the first tutorial was good, so I moved on to his next course and tried to recall how CNNs work. I felt confident that I could build my own CNN using Keras without any help! I learned how image recognition systems work and understood neural networks even better.
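The core operation behind a CNN is simple enough to sketch in plain Python. This is my own minimal illustration (not from the course): a small filter slides over the image, and each output value is the sum of the element-wise products under the filter.

```python
# Slide a 2x2 filter over a 3x3 "image" (valid convolution, stride 1).
def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            # Sum of element-wise products under the filter window.
            total = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw)
            )
            row.append(total)
        out.append(row)
    return out

image = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]
edge_filter = [[1, -1],
               [1, -1]]  # responds to horizontal intensity changes
print(conv2d(image, edge_filter))  # [[-2, -2], [-2, -2]]
```

A real CNN layer learns many such filters at once (Keras's Conv2D does exactly this, plus bias and activation), but the sliding-window multiply-and-sum above is the whole idea.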
Tutorial 3 (February 2019): Building Deep Learning Applications with Keras 2.0
I wanted to learn more about Keras and feel more confident using it. This tutorial was quite short, it was from the same instructor, and I understood Keras a little better.
Tutorial 4 (February 2019): Artificial Intelligence Masterclass
This course is somewhat advanced; it teaches how to build hybrid AI models, I guess like the AI that beat professional players in Dota 2. I picked this course because it has a lot of sections that interest me, such as ANNs, CNNs, RNNs, AutoEncoders, Reinforcement Learning...
I may not follow the advanced levels of this tutorial, but I will start with the sections I mentioned. The instructor is fairly good; he is the one who made the tutorial I mentioned earlier, Machine Learning A-Z™: Hands-On Python & R In Data Science, and any beginner can follow him.
Step 1 - Artificial Neural Network
This section was the same one I watched in the Machine Learning A-Z course. I watched it again anyway to remember how neural networks work. There were a lot of details I recalled, and I liked how the instructor kept things simple for beginners, with no advanced mathematics. I also recognized the optimization course I studied at university (gradient descent, etc.).
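Gradient descent itself fits in a few lines of plain Python. Here is a minimal, hypothetical example (my own sketch, not from the course) that minimizes f(w) = (w - 3)², which has its minimum at w = 3:

```python
# Minimal gradient descent on f(w) = (w - 3)^2, whose gradient is 2 * (w - 3).
def gradient_descent(lr=0.1, steps=100):
    w = 0.0  # arbitrary starting point
    for _ in range(steps):
        grad = 2 * (w - 3)  # derivative of (w - 3)^2 at the current w
        w -= lr * grad      # step in the downhill direction
    return w

print(round(gradient_descent(), 4))  # 3.0
```

Training a neural network is the same loop, except w is millions of weights and the gradient comes from backpropagation instead of a hand-derived formula.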
Step 2 - Convolutional Neural Network
This section went deeper than the Lynda course. Again, it's the same content I watched in Machine Learning A-Z, but I recalled a lot of things, and many things made sense, especially after watching the Lynda course. I guess at this point I want to learn about other neural networks! Enough about CNNs.
Step 3 - AutoEncoder
I had always heard the term AutoEncoder but had no idea what it meant. In this section, I was able to understand what an AutoEncoder is. It was nice getting an idea about it, but it's related to unsupervised learning, and my goal is to get good at supervised learning first! Well, I did watch the entire section; it was good to know how AutoEncoders can be used in recommendation systems, but I will jump to the RNN section next.
Step 4 - Variational AutoEncoder
This section was short, so I decided to watch it before moving on to RNNs. A Variational AutoEncoder is a way to let AI "dream" u_u: it compresses the original environment, extracts the features that really matter, and creates a slightly varying "dream" every time. Just wow. I can't wait to get more into unsupervised learning in the future; this is where all the cool projects happen.
Step 6 - Recurrent Neural Network
I already had an idea about RNNs; this section just refreshed my memory and taught me new things. Sunspring | A Sci-Fi Short Film, whose script was generated by a neural network, blew my mind, especially the generated music! It made me curious to build my own music generation system.
The concept of LSTM was quite difficult; I tried my best to understand it, but it wasn't really clear. Let's just say LSTM is a way to solve the vanishing gradient problem.
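To make the vanishing gradient problem concrete, here is a tiny illustration of my own (a simplification, not from the course): backpropagating through a plain RNN multiplies the gradient by roughly the same recurrent weight at every time step, so with a weight below 1 the learning signal from early time steps shrinks toward zero. The LSTM's gated cell state is designed so the signal can flow through time without this repeated shrinking.

```python
# Repeatedly multiplying a gradient by a recurrent weight makes it
# vanish (weight < 1) or explode (weight > 1) over many time steps.
def backprop_signal(weight, steps):
    grad = 1.0
    for _ in range(steps):
        grad *= weight  # one multiplication per time step
    return grad

print(backprop_signal(0.9, 50))  # ~0.005: the signal is nearly gone
print(backprop_signal(1.1, 50))  # ~117: the opposite problem, exploding gradients
```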
To sum up, this tutorial is nice! It shows the fundamentals of Deep Learning without getting deep into the mathematics; actually, there are no hard mathematical equations at all. The instructor made everything clear, and I felt no need to complete the rest of the sections: I am not ready to make hybrid AI models yet. I need to move on to some practical applications to understand things better. If someone wants to take the same path I am taking, I would recommend following this tutorial instead of the one I followed: Deep Learning A-Z™: Hands-On Artificial Neural Networks. It is more organized and basically has the same sections.
Tutorial 5 (February 2019): Machine Learning with XGBoost Using Scikit-learn in Python
Before implementing ANNs, RNNs, etc., I need another model to compare against. XGBoost is the model most used to win Kaggle competitions, and I always wanted to learn it.
Pluralsight was the only platform I found with a complete tutorial about XGBoost. After completing it, it's not what I expected! It only covers the basics of XGBoost, and I guess that's why there are so few tutorials about XGBoost: it's not that complicated. This tutorial has only one video that actually talks about XGBoost, so I'm not sure it's worth it.
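XGBoost itself is a big, heavily optimized library, but the gradient boosting idea behind it fits in a short sketch. This is my own simplified toy (plain Python, squared error, depth-1 "stumps"), not XGBoost's actual implementation: each round fits a new weak model to the residuals of the ensemble so far, then adds it with a learning rate.

```python
# Toy gradient boosting for 1-D regression with decision stumps.
def fit_stump(xs, residuals):
    # Try every split threshold; return (threshold, left_value, right_value).
    best = None
    for t in xs:
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        if not left or not right:
            continue
        lv, rv = sum(left) / len(left), sum(right) / len(right)
        err = sum((r - (lv if x <= t else rv)) ** 2
                  for x, r in zip(xs, residuals))
        if best is None or err < best[0]:
            best = (err, t, lv, rv)
    return best[1:]

def boost(xs, ys, rounds=20, lr=0.5):
    preds = [0.0] * len(xs)
    for _ in range(rounds):
        # Each new stump is trained on what the ensemble still gets wrong.
        residuals = [y - p for y, p in zip(ys, preds)]
        t, lv, rv = fit_stump(xs, residuals)
        preds = [p + lr * (lv if x <= t else rv)
                 for x, p in zip(xs, preds)]
    return preds

xs = [1, 2, 3, 4, 5, 6]
ys = [1.0, 1.2, 0.9, 4.0, 4.2, 3.9]  # a noisy step function
print([round(p, 2) for p in boost(xs, ys)])
```

XGBoost adds regularization, second-order gradients, deeper trees, and clever engineering on top of this loop, but the fit-to-residuals idea is the core.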
Tutorial 6 (February 2019): Deep Learning with Keras
This tutorial gave me the fundamentals of Keras. The RNN part was a little disappointing, but the other parts were quite nice. As a next step, I will try to do some practice on Kaggle... and get more into cleaning data.
Tutorial 7 (February 2019): Preprocessing for Machine Learning in Python
Data pre-processing is the most important phase in Machine Learning. This tutorial shows the basics of data pre-processing, but I found out that it's better to learn it by practice.
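As a small example of the kind of step such a course covers, here is a plain-Python sketch of my own (not from the tutorial) of z-score standardization, one of the most common pre-processing steps: each feature is rescaled to zero mean and unit variance so no feature dominates just because of its scale.

```python
# Standardize a list of values to zero mean and unit (population) variance.
def standardize(values):
    n = len(values)
    mean = sum(values) / n
    variance = sum((v - mean) ** 2 for v in values) / n
    std = variance ** 0.5
    return [(v - mean) / std for v in values]

heights_cm = [150, 160, 170, 180, 190]
print([round(v, 3) for v in standardize(heights_cm)])
# [-1.414, -0.707, 0.0, 0.707, 1.414]
```

In practice scikit-learn's StandardScaler does this (fit on the training set, then apply the same mean and std to the test set), but the arithmetic is exactly what's above.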
Practice Time:
Problem 1: Using a Keras Long Short-Term Memory (LSTM) Model to Predict Stock Prices
Got an idea about how to implement an RNN with Keras.
Problem 2: Digit Recognizer
Implemented a CNN with my previous knowledge.
Problem 3: House Prices: Advanced Regression Techniques
Tried XGBoost (I didn't extract the features myself; I used the help of another Kernel).
Problem 4: PetFinder.my Adoption Prediction
Tried a recent Kaggle competition and applied my knowledge to this problem (trying different models: Random Forest, XGBoost, ANN).
---->18/02/2019: Got rewarded at the internship for winning the first-week workshop, a hackathon-style event between interns.
Tutorial 8 (February 2019): Machine Learning A-Z™: Hands-On Python & R In Data Science
Re-watched this tutorial to review the classic machine learning models and refresh my memory.
-----------------------------------------------------
(June 2019) In the rest of this blog, I will sum up what I did in the past few months, because I got busy with life and wasn't able to update this blog. Machine Learning A-Z was the last tutorial I followed; after that I started working on Kaggle problems, my internship, etc.
In March, I worked with LSTMs using Keras. I tried to deepen my knowledge in computer vision, so I tried transfer learning with ResNet, etc.
I also gave a machine learning workshop at my old university.
In April, I focused more on my internship, but I did attend a machine learning conference, IndabaX Tunisia, which was really amazing. I also participated in 2 data science hackathons: one involved working with the data of an insurance company, and the other was a private Kaggle competition about hotel reservations, in which I won a prize. At this point, I really felt that my machine learning knowledge had improved compared to what it used to be!
In May, I kept working on my internship subject. I implemented a deep learning research paper (abnormal behavior detection with videos as input) using Keras, and basically the entire month I was busy with my end-of-study project.
Now it's been 2 weeks into June; soon I will validate my end-of-studies internship, and I guess this is the end of my journey.
I had fun learning about machine learning. There are probably still many more things to learn, but for now I feel I have enough knowledge; I got to the level I wanted, which is 7/10, as I said at the beginning of this post.
I may consider doing a PhD in this field, but for my professional career, I will go back to software engineering and maybe use machine learning along the way. I doubt I will become a data scientist; I realized a big part of a data scientist's job is working with data and trying to understand it, and for me, I am more into building things.
In September 2019 I participated in the biggest AI hackathon in Tunisia and in Africa. It was the best hackathon I have ever attended.
I am so happy about this hackathon because I won first place in one of the 6 challenges! The prize is a sponsored trip to Google I/O next year, so I am so excited! I am so happy that my machine learning level has come this far.
I may not consider a job as a data scientist at the moment, but I will always stay curious about ML.