- About Coursera
- About Unsupervised Learning, Recommenders, Reinforcement Learning Course
- Unsupervised Learning, Recommenders, Reinforcement Learning Quiz Answers
- More About This Course

Hello Peers, today we are going to share **all weeks' assessment and quiz answers** of the **Unsupervised Learning, Recommenders, Reinforcement Learning** course launched by **Coursera**, totally **free of cost** ✅. This is a **certification course** for every interested student.

In case you can't find this course for free, you can **apply for financial aid** to get it at no cost.

*Check out this article – “How to Apply for Financial Aid?”*

**About Coursera**

**Coursera**, one of the world's biggest online learning platforms, hosts thousands of courses for students. These courses come from various recognized universities, where industry experts and professors teach in a clear and understandable way.

Here, you will find the **Unsupervised Learning, Recommenders, Reinforcement Learning Exam Answers** in **bold**, given below.

These answers were updated recently and are **100% correct ✅** answers for all weekly assessments and the final exam of the **Unsupervised Learning, Recommenders, Reinforcement Learning** course from **Coursera's free certification program.**

Use “Ctrl+F” to find any question's answer. On mobile, tap the three dots in your browser to open the “Find” option, then search for any question.

**About the Unsupervised Learning, Recommenders, Reinforcement Learning Course**

The Machine Learning Specialization is a foundational online program created by Stanford Online and DeepLearning.AI. In this beginner-friendly program, you'll learn the fundamentals of machine learning and how to use these techniques to build real-world AI applications.

**Course Apply Link – Unsupervised Learning, Recommenders, Reinforcement Learning**

**Unsupervised Learning, Recommenders, Reinforcement Learning** **Quiz Answers**

### Week 01: Unsupervised Learning, Recommenders, Reinforcement Learning Quiz Answers

#### Quiz 01 : Clustering

Q1. Which of these best describes unsupervised learning?

- A form of machine learning that finds patterns using labeled data (x, y)
- A form of machine learning that finds patterns without using a cost function.
- A form of machine learning that finds patterns in data using only labels (y) but without any inputs (x) .
- A form of machine learning that finds patterns using unlabeled data (x).

Q2. Which of these statements are true about K-means? Check all that apply.

- The number of cluster assignment variables c^(i) is equal to the number of training examples.
- If each example x is a vector of 5 numbers, then each cluster centroid μ_k is also going to be a vector of 5 numbers.
- If you are running K-means with K = 3 clusters, then each c^(i) should be 1, 2, or 3.
- The number of cluster centroids μ_k is equal to the number of examples.

Q3. You run K-means 100 times with different initializations. How should you pick from the 100 resulting solutions?

- Pick randomly — that was the point of random initialization.
- Pick the last one (i.e., the 100th random initialization) because K-means always improves over time
- Average all 100 solutions together.
- Pick the one with the lowest cost J

Q4. You run K-means and compute the value of the cost function J(c^(1), …, c^(m), μ_1, …, μ_K) after each iteration. Which of these statements should be true?

- The cost can be greater or smaller than the cost in the previous iteration, but it decreases in the long run.
- There is no cost function for the K-means algorithm.
- Because K-means tries to maximize cost, the cost is always greater than or equal to the cost in the previous iteration.
- The cost will either decrease or stay the same after each iteration.

Q5. In K-means, the elbow method is a method to

- Choose the best number of samples in the dataset
- Choose the number of clusters K
- Choose the maximum number of examples for each cluster
- Choose the best random initialization
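The behavior asked about in Q3–Q5 can be sketched in plain Python. This is a toy implementation, not the course's code; the data and helper names (`dist2`, `kmeans`) are made up for illustration:

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two points."""
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def kmeans(X, K, iters=10, seed=0):
    """Toy K-means: returns (centroids, assignments c, average cost J)."""
    rng = random.Random(seed)
    centroids = [list(p) for p in rng.sample(X, K)]  # random initialization from the data
    for _ in range(iters):
        # Assignment step: c[i] = index of the centroid closest to example i
        c = [min(range(K), key=lambda k: dist2(x, centroids[k])) for x in X]
        # Update step: move each centroid to the mean of its assigned points
        for k in range(K):
            pts = [x for x, ci in zip(X, c) if ci == k]
            if pts:
                centroids[k] = [sum(col) / len(pts) for col in zip(*pts)]
    # Final assignment and cost J (mean squared distance to assigned centroid)
    c = [min(range(K), key=lambda k: dist2(x, centroids[k])) for x in X]
    J = sum(dist2(x, centroids[ci]) for x, ci in zip(X, c)) / len(X)
    return centroids, c, J

# Made-up 1-D data: two clusters around 1 and 8
X = [[1.0], [1.2], [0.8], [8.0], [8.2], [7.8]]
# Q3's recipe: run several random initializations and keep the lowest-cost solution
best_centroids, best_c, best_J = min(
    (kmeans(X, K=2, seed=s) for s in range(10)), key=lambda res: res[2]
)
```

Each iteration can only decrease J or leave it unchanged (Q4), and running this for K = 1, 2, 3, … and plotting the resulting J against K gives the "elbow" curve from Q5.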

#### Quiz 02 : Anomaly detection

Q1. You are building a system to detect if computers in a data center are malfunctioning. You have 10,000 data points of computers functioning well, and no data from computers malfunctioning. What type of algorithm should you use?

- Anomaly detection
- Supervised learning

Q2. You are building a system to detect if computers in a data center are malfunctioning. You have 10,000 data points of computers functioning well, and 10,000 data points of computers malfunctioning. What type of algorithm should you use?

- Anomaly detection
- Supervised learning

Q3. Say you have 5,000 examples of normal airplane engines, and 15 examples of anomalous engines. How would you use the 15 examples of anomalous engines to evaluate your anomaly detection algorithm?

- Because you have data of both normal and anomalous engines, don’t use anomaly detection. Use supervised learning instead.
- Use it during training by fitting one Gaussian model to the normal engines, and a different Gaussian model to the anomalous engines.
- You cannot evaluate an anomaly detection algorithm because it is an unsupervised learning algorithm.
- Put the data of anomalous engines (together with some normal engines) in the cross-validation and/or test sets to measure if the learned model can correctly detect anomalous engines.

Q4. Anomaly detection flags a new input x as an anomaly if p(x) < ε. If we reduce the value of ε, what happens?

- The algorithm is more likely to classify new examples as an anomaly.
- The algorithm is less likely to classify new examples as an anomaly.
- The algorithm is more likely to classify some examples as an anomaly, and less likely to classify some examples as an anomaly. It depends on the example x.
- The algorithm will automatically choose parameters μ and σ to decrease p(x) and compensate.

Q5. You are monitoring the temperature and vibration intensity on newly manufactured aircraft engines. You have measured 100 engines and fit the Gaussian model described in the video lectures to the data. The 100 examples and the resulting distributions are shown in the figure below.

The measurements on the latest engine you are testing have a temperature of 17.5 and a vibration intensity of 48. These are shown in magenta on the figure below. What is the probability of an engine having these two measurements?

- 0.0738 + 0.02288 = 0.0966
- 0.0738 * 0.02288 = 0.00169
- 17.5 + 48 = 65.5
- 17.5 * 48 = 840
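The point of Q5 is that, with a per-feature Gaussian model, the joint probability is the *product* of the per-feature densities, not their sum. A minimal sketch (the per-feature values 0.0738 and 0.02288 are read off the quiz's figure; the function names are mine):

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Univariate Gaussian density N(x; mu, sigma^2)."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

def p(x, mus, sigmas):
    """Density model with independent features: p(x) = prod_j p(x_j; mu_j, sigma_j^2)."""
    out = 1.0
    for xj, mu, s in zip(x, mus, sigmas):
        out *= gaussian_pdf(xj, mu, s)
    return out

# Q5: per-feature densities from the figure multiply together
p_total = 0.0738 * 0.02288  # ~0.00169, matching the second option
```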

### Week 02: Unsupervised Learning, Recommenders, Reinforcement Learning Quiz Answers

#### Quiz 01 : Collaborative Filtering

Q1. You have the following table of movie ratings:

| Movie | Elissa | Zach | Barry | Terry |
| --- | --- | --- | --- | --- |
| Football Forever | 5 | 4 | 3 | ? |
| Pies, Pies, Pies | 1 | ? | 5 | 4 |
| Linear Algebra Live | 4 | 5 | ? | 1 |

Refer to the table above for questions 1 and 2. Assume numbering starts at 1 for this quiz, so the rating for Football Forever by Elissa is at (1,1).

What is the value of n_u, the number of users?

Q2. What is the value of r(2,2)?
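Both quantities can be read straight off the ratings table. A sketch in Python (the table encoding with `None` for "?" and the helper `r` are my own convention, not the course's):

```python
# Ratings table from Q1; None marks an unrated entry ("?").
Y = [
    [5, 4,    3,    None],   # Football Forever
    [1, None, 5,    4   ],   # Pies, Pies, Pies
    [4, 5,    None, 1   ],   # Linear Algebra Live
]

n_u = len(Y[0])  # number of users = number of columns
n_m = len(Y)     # number of movies = number of rows

def r(i, j):
    """r(i, j) = 1 if movie i was rated by user j (1-indexed, as in the quiz)."""
    return 1 if Y[i - 1][j - 1] is not None else 0
```

Here `n_u` counts the columns (4 users), and `r(2,2)` checks whether Zach rated "Pies, Pies, Pies" (he did not, so it is 0).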

Q3. In which of the following situations will a collaborative filtering system be the most appropriate learning algorithm (compared to linear or logistic regression)?

- You run an online bookstore and collect the ratings of many users. You want to use this to identify what books are “similar” to each other (i.e., if a user likes a certain book, what are other books that they might also like?)
- You’re an artist and hand-paint portraits for your clients. Each client gets a different portrait (of themselves) and gives you 1-5 star rating feedback, and each client purchases at most 1 portrait. You’d like to predict what rating your next customer will give you.
- You manage an online bookstore and you have the book ratings from many users. You want to learn to predict the expected sales volume (number of books sold) as a function of the average rating of a book.
- You subscribe to an online video streaming service, and are not satisfied with their movie suggestions. You download all your viewing for the last 10 years and rate each item. You assign each item a genre. Using your ratings and genre assignment, you learn to predict how you will rate new movies based on the genre.

Q4. For recommender systems with binary labels y, which of these are reasonable ways of defining when y should be 1 for a given user j and item i? (Check all that apply.)

- y is 1 if user j purchases item i (after being shown the item)
- y is 1 if user j has been shown item i by the recommendation engine
- y is 1 if user j fav/likes/clicks on item i (after being shown the item)
- y is 1 if user j has not yet been shown item i by the recommendation engine

#### Quiz 02 : Recommender systems implementation

Q1. Lecture described using ‘mean normalization’ to do feature scaling of the ratings. What equation below best describes this algorithm?

- y_norm(i,j) = (y(i,j) − μ_i) / σ_i, where μ_i = (1 / Σ_j r(i,j)) Σ_{j : r(i,j)=1} y(i,j) and σ_i² = (1 / Σ_j r(i,j)) Σ_{j : r(i,j)=1} (y(i,j) − μ_i)²
- y_norm(i,j) = y(i,j) − μ_i, where μ_i = (1 / Σ_j r(i,j)) Σ_{j : r(i,j)=1} y(i,j)
- y_norm(i,j) = (y(i,j) − μ_i) / (max_i − min_i), where μ_i = (1 / Σ_j r(i,j)) Σ_{j : r(i,j)=1} y(i,j)
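The key detail of mean normalization here is that each movie's mean is taken only over the entries that were actually rated. A minimal sketch under that assumption (function and variable names are mine):

```python
def mean_normalize(Y, R):
    """Per-item mean normalization: subtract each row's mean over rated entries only.

    Y[i][j] = rating of item i by user j; R[i][j] = 1 if rated, else 0.
    Unrated entries are left at 0.0 in the normalized matrix.
    """
    mu, Ynorm = [], []
    for yi, ri in zip(Y, R):
        rated = [y for y, r in zip(yi, ri) if r == 1]
        m = sum(rated) / len(rated) if rated else 0.0
        mu.append(m)
        Ynorm.append([y - m if r == 1 else 0.0 for y, r in zip(yi, ri)])
    return Ynorm, mu

# The movie-ratings table from the previous quiz, with 0 as a placeholder where unrated
Y = [[5, 4, 3, 0],
     [1, 0, 5, 4],
     [4, 5, 0, 1]]
R = [[1, 1, 1, 0],
     [1, 0, 1, 1],
     [1, 1, 0, 1]]
Ynorm, mu = mean_normalize(Y, R)
```

For the first row the rated entries are 5, 4, 3, so μ_1 = 4 and the normalized ratings become 1, 0, −1.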

Q2. The implementation of collaborative filtering utilized a custom training loop in TensorFlow. Is it true that TensorFlow always requires a custom training loop?

- Yes. TensorFlow gains flexibility by providing the user primitive operations they can combine in many ways.
- No: TensorFlow provides simplified training operations for some applications.

Q3. Once a model is trained, the ‘distance’ between features vectors gives an indication of how similar items are.

The squared distance between the two vectors x^(k) and x^(i) is:

distance = ‖x^(k) − x^(i)‖² = Σ_{l=1}^{n} (x_l^(k) − x_l^(i))²

Using the table below, find the closest item to the movie “Pies, Pies, Pies”.

| Movie | User 1 | … | User n | x_0 | x_1 | x_2 |
| --- | --- | --- | --- | --- | --- | --- |
| Pastries for Supper | … | … | … | 2.0 | 2.0 | 1.0 |
| Pies, Pies, Pies | … | … | … | 2.0 | 3.0 | 4.0 |
| Pies and You | … | … | … | 5.0 | 3.0 | 4.0 |

- Pastries for Supper
- Pies and You
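The comparison in Q3 is a direct application of the squared-distance formula above. A sketch using the feature values from the table (the dictionary layout is my own):

```python
def sq_dist(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

# Learned feature vectors (x_0, x_1, x_2) from the quiz table
features = {
    "Pastries for Supper": [2.0, 2.0, 1.0],
    "Pies, Pies, Pies":    [2.0, 3.0, 4.0],
    "Pies and You":        [5.0, 3.0, 4.0],
}

target = features["Pies, Pies, Pies"]
closest = min((m for m in features if m != "Pies, Pies, Pies"),
              key=lambda m: sq_dist(features[m], target))
```

"Pastries for Supper" is at squared distance 0 + 1 + 9 = 10, while "Pies and You" is at 9 + 0 + 0 = 9, so the latter is closer.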

Q4. Which of these is an example of the cold start problem? (Check all that apply.)

- A recommendation system is unable to give accurate rating predictions for a new product that no users have rated.
- A recommendation system takes so long to train that users get bored and leave.
- A recommendation system is unable to give accurate rating predictions for a new user that has rated few products.
- A recommendation system is so computationally expensive that it causes your computer CPU to heat up, causing your computer to need to be cooled down and restarted.

#### Quiz 03 : Content-based filtering

Q1. Vector x_u and vector x_m must be of the same dimension, where x_u is the input feature vector for a user (age, gender, etc.) and x_m is the input feature vector for a movie (year, genre, etc.). True or false?

- True
- False

Q2. If we find that two movies, i and k, have vectors v_m^(i) and v_m^(k) that are similar to each other (i.e., ‖v_m^(i) − v_m^(k)‖ is small), then which of the following is likely to be true? Pick the best answer.

- We should recommend to users one of these two movies, but not both.
- The two movies are similar to each other and will be liked by similar users.
- A user that has watched one of these two movies has probably watched the other as well.
- The two movies are very dissimilar.

Q3. Which of the following neural network configurations are valid for a content-based filtering application? Please note carefully the dimensions of the neural network indicated in the diagram. Check all the options that apply:

- The user vector v_u is 32 dimensional, and the item vector v_m is 64 dimensional

- The user and the item networks have different architectures

- The user and item networks have 64 dimensional v_u and v_m vector respectively

- Both the user and the item networks have the same architecture

Q4. You have built a recommendation system to retrieve musical pieces from a large database of music, and have an algorithm that uses separate retrieval and ranking steps. If you modify the algorithm to add more musical pieces to the retrieved list (i.e., the retrieval step returns more items), which of these are likely to happen? Check all that apply

- The quality of recommendations made to users should stay the same or improve.
- The system’s response time might increase (i.e., users have to wait longer to get recommendations)
- The system’s response time might decrease (i.e., users get recommendations more quickly)
- The quality of recommendations made to users should stay the same or worsen.

Q5. To speed up the response time of your recommendation system, you can pre-compute the vectors v_m for all the items you might recommend. This can be done even before a user logs in to your website and even before you know the x_u or v_u vector. True/False?

- True
- False

### Week 03: Unsupervised Learning, Recommenders, Reinforcement Learning Quiz Answers

#### Quiz 01 : Reinforcement learning introduction

Q1. You are using reinforcement learning to control a four-legged robot. The position of the robot would be its _____.

- state
- return
- action
- reward

Q2. You are controlling a Mars rover. You will be very very happy if it gets to state 1 (significant scientific discovery), slightly happy if it gets to state 2 (small scientific discovery), and unhappy if it gets to state 3 (rover is permanently damaged). To reflect this, choose a reward function so that:

- R(1) < R(2) < R(3), where R(1) and R(2) are negative and R(3) is positive.
- R(1) > R(2) > R(3), where R(1), R(2) and R(3) are positive.
- R(1) > R(2) > R(3), where R(1) and R(2) are positive and R(3) is negative.
- R(1) > R(2) > R(3), where R(1), R(2) and R(3) are negative.

Q3. You are using reinforcement learning to fly a helicopter. Using a discount factor of 0.75, your helicopter starts in some state and receives rewards -100 on the first step, -100 on the second step, and 1000 on the third and final step (where it has reached a terminal state). What is the return?

- -100 – 0.25*100 + 0.25^2*1000
- -100 – 0.75*100 + 0.75^2*1000
- -0.25*100 – 0.25^2*100 + 0.25^3*1000
- -0.75*100 – 0.75^2*100 + 0.75^3*1000
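The return in Q3 is the discounted sum of rewards, with the first reward undiscounted. A quick sketch using Q3's helicopter numbers (the function name is mine):

```python
def discounted_return(rewards, gamma):
    """Return = R1 + gamma*R2 + gamma^2*R3 + ... (rewards listed from the first step)."""
    return sum(r * gamma ** t for t, r in enumerate(rewards))

# Q3's episode: rewards -100, -100, 1000 with discount factor gamma = 0.75
G = discounted_return([-100, -100, 1000], 0.75)
# = -100 - 0.75*100 + 0.75**2 * 1000 = 387.5, matching the second option
```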

Q4. Given the rewards and actions below, compute the return from state 3 with a discount factor of γ = 0.25.

- 0
- 0.39
- 6.25
- 25

#### Quiz 02 : State-action value function

Q1. Which of the following accurately describes the state-action value function Q(s, a)?

- It is the return if you start from state s, take action a (once), then behave optimally after that.
- It is the return if you start from state s and repeatedly take action a.
- It is the return if you start from state s and behave optimally.
- It is the immediate reward if you start from state s and take action a (once).

Q2. You are controlling a robot that has 3 actions: ← (left), → (right) and STOP. From a given state s, you have computed Q(s, ←) = -10, Q(s, →) = -20, Q(s, STOP) = 0.

What is the optimal action to take in state s?

- STOP
- ← (left)
- → (right)
- Impossible to tell

Q3. For this problem, γ = 0.25. The diagram below shows the return and the optimal action from each state. Please compute Q(5, ←).

- 0.625
- 0.391
- 1.25
- 2.5

#### Quiz 03 : Continuous state spaces

Q1. The Lunar Lander is a continuous state Markov Decision Process (MDP) because:

- The state contains numbers such as position and velocity that are continuous valued.
- The reward contains numbers that are continuous valued.
- The state has multiple numbers rather than only a single number (such as position in the x-direction).
- The state-action value function Q(s, a) outputs continuous valued numbers.

Q2. In the learning algorithm described in the videos, we repeatedly create an artificial training set to which we apply supervised learning, where the input is x = (s, a) and the target, constructed using the Bellman equation, is y = _____?

- y = R(s′), where s′ is the state you get to after taking action a in state s
- y = max_{a′} Q(s′, a′), where s′ is the state you get to after taking action a in state s
- y = R(s) + γ max_{a′} Q(s′, a′), where s′ is the state you get to after taking action a in state s
- y = R(s)
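The Bellman-equation target from Q2 is straightforward to write down. A sketch with hypothetical numbers (the function name and example values are mine, not from the course):

```python
def bellman_target(R_s, gamma, q_next):
    """Training target y = R(s) + gamma * max_{a'} Q(s', a'), per the Bellman equation.

    R_s:    reward received in the current state s
    q_next: current Q(s', a') estimates for every action a' in the next state s'
    (a terminal state would use y = R(s) alone, since there is no s').
    """
    return R_s + gamma * max(q_next)

# Hypothetical numbers: reward 0 now, gamma = 0.5, next-state Q-values [10, 4]
y = bellman_target(0, 0.5, [10, 4])  # 0 + 0.5 * 10 = 5.0
```

In the Lunar Lander lab, targets built this way from (s, a, R(s), s′) tuples form the artificial training set for the Q-network.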

Q3. You have reached the final practice quiz of this class! What does that mean? (Please check all the answers, because all of them are correct!)

- What an accomplishment — you made it!
- Andrew sends his heartfelt congratulations to you!
- The DeepLearning.AI and Stanford Online teams would like to give you a round of applause!
- You deserve to celebrate!

**More About This Course**

In the third course of the Specialization in Machine Learning, you will:

• Use unsupervised learning techniques such as clustering and anomaly detection.

• Build recommender systems using a content-based deep learning method and a collaborative filtering method.

• Build a deep reinforcement learning model.


Andrew Ng teaches this Specialization. He is a visionary in the field of AI who has led important research at Stanford University and groundbreaking work at Google Brain, Baidu, and Landing AI.

This 3-course Specialization is an updated and expanded version of Andrew’s pioneering Machine Learning course, which has been taken by more than 4.8 million people since it was first offered in 2012. It has a rating of 4.9 out of 5 stars.

It gives a broad introduction to modern machine learning, including supervised learning (multiple linear regression, logistic regression, neural networks, and decision trees), unsupervised learning (clustering, dimensionality reduction, and recommender systems), and some of the best practices used in Silicon Valley for AI and machine learning innovation (evaluating and tuning models, taking a data-centric approach to improving performance, and more.)

By the end of this Specialization, you will have mastered key ideas and learned the skills you need to use machine learning quickly and effectively to solve hard problems in the real world. The new Machine Learning Specialization is the best place to start if you want to get into AI or build a career in machine learning.

**WHAT YOU’LL LEARN**

Use unsupervised learning techniques such as clustering and anomaly detection.

Use a collaborative filtering method and a content-based deep learning method to build recommender systems.

Build a deep reinforcement learning model.

**SKILLS YOU WILL GAIN**

- Collaborative Filtering
- Unsupervised Learning
- Recommender Systems
- Reinforcement Learning
- Anomaly Detection

**Conclusion**

Hopefully, this article helped you find all the **weekly, final assessment, and peer-graded assessment answers of the Unsupervised Learning, Recommenders, Reinforcement Learning quiz on Coursera** and grab some premium knowledge with less effort. If this article helped you, share it with your friends on social media and let them know about this training. You can also check out our other course answers, and follow our Techno-RJ **blog** for updates on more free courses and their exam/quiz solutions.
