Sequence Models Coursera Quiz Answers 2022 | All Weeks Assessment Answers [💯Correct Answer]

Hello Peers, today we are going to share the assessment and quiz answers for all weeks of the Sequence Models course offered by Coursera, totally free of cost✅✅✅. This is a certification course for every interested student.

In case you can’t take this course for free, you can apply for financial aid to get it at no cost.

Check out this article: “How to Apply for Financial Aid?”

About Coursera

Coursera, one of the world’s biggest online learning platforms, offers thousands of courses from various recognized universities, where industry experts and professors teach in a clear and easily understandable way.


Here you will find the Sequence Models exam answers, which are given below.

These answers are updated regularly and are 100% correct✅ for every week’s assessment and the final exam of the Sequence Models free certification course on Coursera.

Use “Ctrl+F” to find any question’s answer. On mobile, tap the three dots in your browser and you will find a “Find” option there. Use it to jump to any question’s answer.

About Sequence Models Course

In the fifth course of the Deep Learning Specialization, you will learn about sequence models and their exciting uses, such as speech recognition, music synthesis, chatbots, machine translation, natural language processing (NLP), and more.

After completing this course, you will have the knowledge and skills necessary to construct and train Recurrent Neural Networks (RNNs) and common variants like GRUs and LSTMs; apply RNNs to Character-level Language Modeling; learn about NLP and Word Embeddings; and apply HuggingFace tokenizers and transformer models to various NLP tasks like NER and Question Answering.

The Deep Learning Specialization is an introductory program that will prepare you to take part in the creation of cutting-edge AI by teaching you the basics of the field, including its benefits, drawbacks, and future prospects. It will help you advance your career in artificial intelligence by giving you the foundational knowledge and abilities necessary to take the next big step in the field.

Course Apply Link – Sequence Models

Sequence Models Quiz Answers

Sequence Models Week 1 Coursera Quiz Answers

Q1. Suppose your training examples are sentences (sequences of words). Which of the following refers to the j^{th} word in the i^{th} training example?

  • x^{(i)<j>}
  • x^{<i>(j)}
  • x^{(j)<i>}
  • x^{<j>(i)}
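
A quick illustration of this notation: the superscript (i) indexes the training example, and <j> indexes the word within it. A minimal sketch with a hypothetical tokenized dataset:

    # X[i][j] holds the j-th word of the i-th training example
    # (0-based Python indices standing in for the 1-based (i) and <j>).
    X = [
        ["the", "cat", "sat"],        # training example 1
        ["dogs", "bark", "loudly"],   # training example 2
    ]
    i, j = 2, 3
    print(X[i - 1][j - 1])            # x^{(2)<3>} -> "loudly"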


Q2. Consider this RNN:

This specific type of architecture is appropriate when:

  • T_x = T_y
  • T_x < T_y
  • T_x > T_y
  • T_x = 1


Q3. To which of these tasks would you apply a many-to-one RNN architecture? (Check all that apply).

  • Speech recognition (input an audio clip and output a transcript)
  • Sentiment classification (input a piece of text and output a 0/1 to denote positive or negative sentiment)
  • Image classification (input an image and output a label)
  • Gender recognition from speech (input an audio clip and output a label indicating the speaker’s gender)

Q4. You are training this RNN language model.

At the t-th time step, what is the RNN doing? Choose the best answer.

  • Estimating P(y^{<1>}, y^{<2>}, …, y^{<t-1>})
  • Estimating P(y^{<t>})
  • Estimating P(y^{<t>} | y^{<1>}, y^{<2>}, …, y^{<t-1>})
  • Estimating P(y^{<t>} | y^{<1>}, y^{<2>}, …, y^{<t>})

Q5. You have finished training a language model RNN and are using it to sample random sentences, as follows:

What are you doing at each time step t?

  • (i) Use the probabilities output by the RNN to pick the highest probability word for that time-step as ŷ^{<t>}. (ii) Then pass the ground-truth word from the training set to the next time-step.
  • (i) Use the probabilities output by the RNN to randomly sample a chosen word for that time-step as ŷ^{<t>}. (ii) Then pass the ground-truth word from the training set to the next time-step.
  • (i) Use the probabilities output by the RNN to pick the highest probability word for that time-step as ŷ^{<t>}. (ii) Then pass this selected word to the next time-step.
  • (i) Use the probabilities output by the RNN to randomly sample a chosen word for that time-step as ŷ^{<t>}. (ii) Then pass this selected word to the next time-step.
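
To make the sampling behavior concrete, here is a minimal NumPy sketch of that loop; rnn_step is a hypothetical function returning the softmax distribution over the vocabulary and the next hidden state:

    import numpy as np

    def sample_sentence(rnn_step, vocab_size, eos_idx, a0, max_len=50):
        """Sample words one at a time from a trained RNN language model."""
        a_prev, x = a0, np.zeros(vocab_size)      # first input is a zero vector
        sampled = []
        while len(sampled) < max_len:
            probs, a_prev = rnn_step(x, a_prev)   # softmax over the vocabulary
            idx = np.random.choice(vocab_size, p=probs)  # sample, don't argmax
            sampled.append(idx)
            if idx == eos_idx:
                break
            x = np.zeros(vocab_size)
            x[idx] = 1                            # feed the sampled word forward
        return sampled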

Q6. You are training an RNN, and find that your weights and activations are all taking on the value of NaN (“Not a Number”). Which of these is the most likely cause of this problem?

  • Vanishing gradient problem.
  • Exploding gradient problem.
  • ReLU activation function g(.) used to compute g(z), where z is too large.
  • Sigmoid activation function g(.) used to compute g(z), where z is too large.
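
The usual remedy for exploding gradients is gradient clipping, applied just before the parameter update; a minimal sketch, assuming the gradients live in a dict of NumPy arrays:

    import numpy as np

    def clip_gradients(grads, max_value=5.0):
        """Clip every gradient element to [-max_value, max_value] instead of
        letting it blow up to NaN over long sequences."""
        return {name: np.clip(g, -max_value, max_value)
                for name, g in grads.items()}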

Q7. Suppose you are training an LSTM. You have a 10000-word vocabulary, and are using an LSTM with 100-dimensional activations a^{<t>}. What is the dimension of Γ_u at each time step?

  • 1
  • 100
  • 300
  • 10000
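
The answer follows from the gate computation: Γ_u has one value per hidden unit, so it is 100-dimensional. A small sketch of the shapes (the weight shapes are assumptions matching the usual concatenated-input formulation):

    import numpy as np

    n_a, n_x = 100, 10000                  # hidden units, one-hot input size

    def sigmoid(z):
        return 1 / (1 + np.exp(-z))

    Wu = np.random.randn(n_a, n_a + n_x) * 0.01
    bu = np.zeros(n_a)
    a_prev, x_t = np.zeros(n_a), np.zeros(n_x)

    gamma_u = sigmoid(Wu @ np.concatenate([a_prev, x_t]) + bu)
    print(gamma_u.shape)                   # (100,): one gate value per unit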

Q8. Here are the update equations for the GRU.

Alice proposes to simplify the GRU by always removing the Γ_u, i.e., setting Γ_u = 1. Betty proposes to simplify the GRU by removing the Γ_r, i.e., setting Γ_r = 1 always. Which of these models is more likely to work without vanishing gradient problems even when trained on very long input sequences?

  • Alice’s model (removing Γ_u), because if Γ_r ≈ 0 for a timestep, the gradient can propagate back through that timestep without much decay.
  • Alice’s model (removing Γ_u), because if Γ_r ≈ 1 for a timestep, the gradient can propagate back through that timestep without much decay.
  • Betty’s model (removing Γ_r), because if Γ_u ≈ 0 for a timestep, the gradient can propagate back through that timestep without much decay.
  • Betty’s model (removing Γ_r), because if Γ_u ≈ 1 for a timestep, the gradient can propagate back through that timestep without much decay.
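
For reference, a minimal sketch of the GRU update being discussed. The key line is the last one: when Γ_u ≈ 0, c^{<t>} ≈ c^{<t-1>}, so the gradient passes back through the timestep almost undecayed:

    import numpy as np

    def sigmoid(z):
        return 1 / (1 + np.exp(-z))

    def gru_step(c_prev, x_t, Wu, bu, Wr, br, Wc, bc):
        """One GRU step with update gate gamma_u and relevance gate gamma_r."""
        concat = np.concatenate([c_prev, x_t])
        gamma_u = sigmoid(Wu @ concat + bu)    # update gate
        gamma_r = sigmoid(Wr @ concat + br)    # relevance gate
        c_cand = np.tanh(Wc @ np.concatenate([gamma_r * c_prev, x_t]) + bc)
        return gamma_u * c_cand + (1 - gamma_u) * c_prev  # gate ~0 => keep c_prev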

Q9. Here are the equations for the GRU and the LSTM:

From these, we can see that the Update Gate and Forget Gate in the LSTM play a role similar to _______ and ______ in the GRU. What should go in the blanks?

  • Γ_u and 1 − Γ_u
  • Γ_u and Γ_r
  • 1 − Γ_u and Γ_u
  • Γ_r and Γ_u

Q10. You have a pet dog whose mood is heavily dependent on the current and past few days’ weather. You’ve collected data for the past 365 days on the weather, which you represent as a sequence x^{<1>}, …, x^{<365>}. You’ve also collected data on your dog’s mood, which you represent as y^{<1>}, …, y^{<365>}. You’d like to build a model to map from x → y. Should you use a Unidirectional RNN or Bidirectional RNN for this problem?

  • Bidirectional RNN, because this allows the prediction of mood on day t to take into account more information.
  • Bidirectional RNN, because this allows backpropagation to compute more accurate gradients.
  • Unidirectional RNN, because the value of y^{<t>} depends only on x^{<1>}, …, x^{<t>}, but not on x^{<t+1>}, …, x^{<365>}.
  • Unidirectional RNN, because the value of y^{<t>} depends only on x^{<t>}, and not on other days’ weather.

Sequence Models Week 2 Coursera Quiz Answers

Q1. Suppose you learn a word embedding for a vocabulary of 10000 words. Then the embedding vectors should be 10000 dimensional, so as to capture the full range of variation and meaning in those words.

  • True
  • False

Q2. What is t-SNE?

  • An open-source sequence modeling library
  • A supervised learning algorithm for learning word embeddings
  • A non-linear dimensionality reduction technique
  • A linear transformation that allows us to solve analogies on word vectors
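
t-SNE, the correct choice, is commonly used to project high-dimensional word embeddings down to 2-D for visualization. A short scikit-learn sketch, with random vectors standing in for real trained embeddings:

    import numpy as np
    from sklearn.manifold import TSNE

    embeddings = np.random.randn(1000, 300)   # stand-in for trained word vectors
    coords = TSNE(n_components=2, perplexity=30).fit_transform(embeddings)
    print(coords.shape)                       # (1000, 2): one 2-D point per word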

Q3. Suppose you download a pre-trained word embedding which has been trained on a huge corpus of text. You then use this word embedding to train an RNN for a language task of recognizing if someone is happy from a short snippet of text, using a small training set.

x (input text)                   y (happy?)
I’m feeling wonderful today!     1
I’m bummed my cat is ill.        0
Really enjoying this!            1

Then even if the word “ecstatic” does not appear in your small training set, your RNN might reasonably be expected to recognize “I’m ecstatic” as deserving a label y = 1.

  • True
  • False

Q4. Which of these equations do you think should hold for a good word embedding? (Check all that apply)

  • e_{boy} − e_{girl} ≈ e_{brother} − e_{sister}
  • e_{boy} − e_{brother} ≈ e_{girl} − e_{sister}
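
Both equations encode the same parallelogram structure (“boy is to girl as brother is to sister”). A minimal check using cosine similarity, with random vectors standing in for trained embeddings:

    import numpy as np

    def cosine(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

    rng = np.random.default_rng(0)
    e = {w: rng.standard_normal(50) for w in ["boy", "girl", "brother", "sister"]}

    # For a good embedding these two difference vectors point the same way,
    # i.e. cosine close to 1 (random stand-ins won't show that, of course).
    print(cosine(e["boy"] - e["girl"], e["brother"] - e["sister"]))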

Q5. Let E be an embedding matrix, and let o_{1234} be a one-hot vector corresponding to word 1234. Then to get the embedding of word 1234, why don’t we call E * o_{1234} in Python?

  • None of the above: calling the Python snippet as described above is fine.
  • It is computationally wasteful.
  • The correct formula is E^T * o_{1234}.
  • This doesn’t handle unknown words (<UNK>).
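
The reason it is computationally wasteful: multiplying E by a one-hot vector performs a full matrix-vector product just to select one column, while a direct slice does no arithmetic at all. A NumPy sketch:

    import numpy as np

    E = np.random.randn(300, 10000)    # one 300-d embedding per vocabulary word
    o = np.zeros(10000)
    o[1234] = 1                        # one-hot vector for word 1234

    slow = E @ o                       # ~3 million multiply-adds
    fast = E[:, 1234]                  # constant-time column lookup
    print(np.allclose(slow, fast))     # True: identical result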

Q6. When learning word embeddings, we create an artificial task of estimating P(target | context). It is okay if we do poorly on this artificial prediction task; the more important by-product of this task is that we learn a useful set of word embeddings.

  • True
  • False

Q7. In the word2vec algorithm, you estimate P(t | c), where t is the target word and c is a context word. How are t and c chosen from the training set? Pick the best answer.

  • c is the one word that comes immediately before t.
  • c is the sequence of all the words in the sentence before t.
  • c and t are chosen to be nearby words.
  • c is a sequence of several words immediately before t.

Q8. Suppose you have a 10000-word vocabulary, and are learning 500-dimensional word embeddings. The word2vec model uses the following softmax function:

P(t | c) = e^{θ_t^T e_c} / ∑_{t'=1}^{10000} e^{θ_{t'}^T e_c}

  • After training, we should expect θ_t to be very close to e_c when t and c are the same word.
  • θ_t and e_c are both 10000-dimensional vectors.
  • θ_t and e_c are both trained with an optimization algorithm such as Adam or gradient descent.
  • θ_t and e_c are both 500-dimensional vectors.
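
A sketch of that softmax in NumPy, with randomly initialized parameters; it makes the dimensions visible in code: θ_t and e_c are both 500-dimensional, and both are learned:

    import numpy as np

    vocab, dim = 10000, 500
    theta = np.random.randn(vocab, dim) * 0.01   # one theta_t per target word
    e = np.random.randn(vocab, dim) * 0.01       # one e_c per context word

    def p_target_given_context(t, c):
        logits = theta @ e[c]                    # theta_{t'}^T e_c for every t'
        logits -= logits.max()                   # numerical stability
        probs = np.exp(logits) / np.exp(logits).sum()
        return probs[t]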

Q9. Suppose you have a 10000-word vocabulary, and are learning 500-dimensional word embeddings. The GloVe model minimizes this objective:

min ∑_{i=1}^{10000} ∑_{j=1}^{10000} f(X_{ij}) (θ_i^T e_j + b_i + b_j' − log X_{ij})^2

Which of these statements are correct? Check all that apply.

  • The weighting function f(.) must satisfy f(0) = 0.
  • X_{ij} is the number of times word j appears in the context of word i.
  • θ_i and e_j should be initialized to 0 at the beginning of training.
  • θ_i and e_j should be initialized randomly at the beginning of training.

Q10.

You have trained word embeddings using a text dataset of m_1 words. You are considering using these word embeddings for a language task, for which you have a separate labeled dataset of m_2 words. Keeping in mind that using word embeddings is a form of transfer learning, under which of these circumstances would you expect the word embeddings to be helpful?

  • m_1 >> m_2
  • m_1 << m_2

Sequence Models Week 3 Coursera Quiz Answers

Q1. Consider using this encoder-decoder model for machine translation.

This model is a “conditional language model” in the sense that the encoder portion (shown in green) is modeling the probability of the input sentence x.

  • False
  • True

Q2. In beam search, if you increase the beam width B, which of the following would you expect to be true? Check all that apply.

  • Beam search will converge after fewer steps.
  • Beam search will generally find better solutions (i.e., do a better job maximizing P(y | x)).
  • Beam search will use up more memory.
  • Beam search will run more slowly.
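
A sketch of one beam-search step shows where the extra memory and time go: B live hypotheses are kept, and B × |V| extensions are scored per step. log_prob_next is a hypothetical model call:

    import heapq

    def beam_step(beams, vocab, log_prob_next, B):
        """Expand each (prefix, score) pair by every word; keep the best B."""
        candidates = []
        for prefix, score in beams:               # B live hypotheses
            for w in vocab:                       # B * |V| scored extensions
                candidates.append((prefix + [w],
                                   score + log_prob_next(prefix, w)))
        return heapq.nlargest(B, candidates, key=lambda c: c[1])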

Q3. In machine translation, if we carry out beam search without using sentence normalization, the algorithm will tend to output overly short translations.

  • True
  • False
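
Why “True”: each additional word multiplies in a probability below 1, so un-normalized beam search prefers short outputs. Length normalization divides the summed log-probabilities by (length)^α; a minimal sketch, with α = 0.7 as a common heuristic:

    def beam_score(log_probs, alpha=0.7):
        """Length-normalized score of one candidate translation.

        log_probs: per-word values log P(y_t | x, y_1..y_{t-1}).
        """
        return sum(log_probs) / (len(log_probs) ** alpha)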

Q4. Suppose you are building a speech recognition system, which uses an RNN model to map from audio clip x to a text transcript y. Your algorithm uses beam search to try to find the value of y that maximizes P(y | x). On a dev set example, your algorithm outputs a transcript ŷ, while a human provides a superior transcript y*, with:

P(ŷ | x) = 1.09 × 10^-7
P(y* | x) = 7.21 × 10^-8

Would you expect increasing the beam width B to help correct this example?

  • No, because P(y* | x) ≤ P(ŷ | x) indicates the error should be attributed to the RNN rather than to the search algorithm.
  • No, because P(y* | x) ≤ P(ŷ | x) indicates the error should be attributed to the search algorithm rather than to the RNN.
  • Yes, because P(y* | x) ≤ P(ŷ | x) indicates the error should be attributed to the search algorithm rather than to the RNN.
  • Yes, because P(y* | x) ≤ P(ŷ | x) indicates the error should be attributed to the RNN rather than to the search algorithm.

Q5. Continuing the example from Q4, suppose you work on your algorithm for a few more weeks, and now find that for the vast majority of examples on which your algorithm makes a mistake, P(y* | x) > P(ŷ | x). This suggests you should focus your attention on improving the search algorithm.

  • True.
  • False.

Q6. Consider the attention model for machine translation.

Further, here is the formula for α^{<t,t'>}.

Which of the following statements about α^{<t,t'>} are true? Check all that apply.

  • We expect α^{<t,t'>} to be generally larger for values of a^{<t'>} that are highly relevant to the value the network should output for y^{<t>}. (Note the indices in the superscripts.)
  • ∑_{t'} α^{<t,t'>} = 1. (Note the summation is over t'.)

Q7. The network learns where to “pay attention” by learning the values e^{<t,t'>}, which are computed using a small neural network:

We can’t replace s^{<t-1>} with s^{<t>} as an input to this neural network. This is because s^{<t>} depends on α^{<t,t'>}, which in turn depends on e^{<t,t'>}; so at the time we need to evaluate this network, we haven’t computed s^{<t>} yet.

  • False
  • True

Q8. Compared to the encoder-decoder model shown in Question 1 of this quiz (which does not use an attention mechanism), we expect the attention model to have the greatest advantage when:

  • The input sequence length T_x is large.
  • The input sequence length T_x is small.

Q9. Under the CTC model, identical repeated characters not separated by the “blank” character (_) are collapsed. Under the CTC model, what does the following string collapse to?

_c_oo_o_kk___b_ooooo__oo__kkk

  • cokbok
  • coookkboooooookkk
  • cookbook
  • cook book
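
A minimal sketch of the CTC collapse rule (merge adjacent repeats, then drop blanks), which confirms “cookbook”:

    from itertools import groupby

    def ctc_collapse(s, blank="_"):
        """Collapse repeated characters not separated by blank, remove blanks."""
        deduped = "".join(ch for ch, _ in groupby(s))  # merge adjacent repeats
        return deduped.replace(blank, "")

    print(ctc_collapse("_c_oo_o_kk___b_ooooo__oo__kkk"))  # cookbook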

Q10. In trigger word detection, x^{<t>} is:

  • Whether someone has just finished saying the trigger word at time t.
  • The t-th input word, represented as either a one-hot vector or a word embedding.
  • Whether the trigger word is being said at time t.
  • Features of the audio (such as spectrogram features) at time t.

Sequence Models Week 4 Coursera Quiz Answers

Q1. A Transformer Network, like its predecessors RNNs, GRUs and LSTMs, can process information one word at a time. (Sequential architecture).

  • False
  • True

Q2. Transformer Network methodology is taken from: (Check all that apply)

  • Attention mechanism.
  • Convolutional Neural Network style of architecture.
  • None of these.
  • Convolutional Neural Network style of processing.

Q3. The concept of Self-Attention is that:

  • Given a word, its neighbouring words are used to compute its context by selecting the highest of those word values to map the Attention related to that given word.
  • Given a word, its neighbouring words are used to compute its context by selecting the lowest of those word values to map the Attention related to that given word.
  • Given a word, its neighbouring words are used to compute its context by taking the average of those word values to map the Attention related to that given word.
  • Given a word, its neighbouring words are used to compute its context by summing up the word values to map the Attention related to that given word.

Q4. Which of the following correctly represents Attention?

  • Attention(Q, K, V) = softmax(QK^T / √d_k) V
  • Attention(Q, K, V) = softmax(QV^T / √d_k) K
  • Attention(Q, K, V) = min(QV^T / √d_k) K
  • Attention(Q, K, V) = min(QK^T / √d_k) V
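
The first option is the scaled dot-product attention from “Attention Is All You Need”; a compact NumPy sketch:

    import numpy as np

    def attention(Q, K, V):
        """softmax(Q K^T / sqrt(d_k)) V, with the softmax taken over the keys."""
        d_k = K.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)               # (n_q, n_k) similarities
        scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)
        return weights @ V                            # weighted sum of values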

Q5. Are the following statements true regarding Query (Q), Key (K) and Value (V)?

Q = interesting questions about the words in a sentence

K = specific representations of words given a Q

V = qualities of words given a Q

  • False
  • True

Q6. [attention formula image]

i here represents the computed attention weight matrix associated with the i-th “word” in a sentence.

  • False
  • True

Q7. Following is the architecture within a Transformer Network (without displaying positional encoding and output layer(s)).

What information does the Decoder take from the Encoder for its second block of Multi-Head Attention? (Marked X, pointed to by the independent arrow)

(Check all that apply)

  • K
  • Q
  • V

Q8. Following is the architecture within a Transformer Network (without displaying positional encoding and output layer(s)).

What is the output layer(s) of the Decoder? (Marked Y, pointed to by the independent arrow)

  • Softmax layer
  • Linear layer
  • Softmax layer followed by a linear layer.
  • Linear layer followed by a softmax layer.

Q9. Why is positional encoding important in the translation process? (Check all that apply)

  • Position and word order are essential in sentence construction of any language.
  • It helps to locate every word within a sentence.
  • It is used in CNN and works well there.
  • Providing extra information to our model.

Q10. Which of these is a good criterion for a good positional encoding algorithm?

  • It should output a unique encoding for each time-step (word’s position in a sentence).
  • Distance between any two time-steps should be consistent for all sentence lengths.
  • The algorithm should be able to generalize to longer sentences.
  • None of these.
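
The sinusoidal encoding used in the original Transformer meets all three criteria above; a short NumPy sketch (assuming an even d_model):

    import numpy as np

    def positional_encoding(max_len, d_model):
        """PE[pos, 2i] = sin(pos / 10000^(2i/d)), PE[pos, 2i+1] = cos(...)."""
        pos = np.arange(max_len)[:, None]
        i = np.arange(0, d_model, 2)[None, :]
        angles = pos / np.power(10000.0, i / d_model)
        pe = np.zeros((max_len, d_model))
        pe[:, 0::2] = np.sin(angles)   # even dimensions
        pe[:, 1::2] = np.cos(angles)   # odd dimensions
        return pe

    print(positional_encoding(50, 128).shape)   # (50, 128)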


SKILLS YOU WILL GAIN

  • Natural Language Processing
  • Long Short Term Memory (LSTM)
  • Gated Recurrent Unit (GRU)
  • Recurrent Neural Network
  • Attention Models

Conclusion

Hopefully, this article helped you find all the weekly, final assessment, and peer-graded assessment answers for the Sequence Models quiz on Coursera, and pick up some premium knowledge with less effort. If this article really helped you in any way, then make sure to share it with your friends on social media and let them know about this amazing training. You can also check out our other course answers. So stay with us, guys; we will share a lot more free courses and their exam/quiz solutions, and follow our Techno-RJ Blog for more updates.
