Probabilistic Graphical Models 1: Representation Coursera Quiz Answers 2022 | All Weeks Assessment Answers [💯Correct Answer]

Hello Peers, today we are sharing the answers to all weekly assessments and quizzes of the Probabilistic Graphical Models 1: Representation course offered on Coursera, totally free of cost✅. This is a certification course for every interested student.

In case you can't take this course for free, you can apply for financial aid to get it at no cost.

Check out this article: "How to Apply for Financial Aid?"

About Coursera

Coursera, one of the world's biggest online learning platforms, offers thousands of courses to students. These courses come from various recognized universities, where industry experts and professors teach in a clear and understandable way.


Here you will find the Probabilistic Graphical Models 1: Representation exam answers, marked in bold, given below.

These answers are recently updated and are 100% correct✅ answers to all weekly quizzes, assessments, and the final exam of Probabilistic Graphical Models 1: Representation from Coursera.

Use "Ctrl+F" to find the answer to any question. On mobile, tap the three dots in your browser and use the "Find" option to search for any question.

About Probabilistic Graphical Models 1: Representation Course

Probabilistic graphical models (PGMs) provide a robust framework for encoding joint (multivariate) distributions over large numbers of interacting random variables. These representations rely on ideas from probability theory, graph algorithms, machine learning, and other areas at the crossroads of statistics and computer science. They form the basis for cutting-edge approaches in numerous fields, including medicine, computer science, linguistics, and physics, and they serve as the backbone of many machine learning problem formulations.

This is the first course in a three-part series. It explains the two primary representations of PGMs: Bayesian networks, which use a directed graph, and Markov networks, which use an undirected graph. Course material covers both the theoretical underpinnings and practical applications of these representations.

The honors program includes multiple practical exercises on modeling various real-world problems and is strongly suggested for those who wish to excel academically. Important extensions to the basic PGM formalism are also covered in the course, allowing for the compact encoding of increasingly complex models.

SKILLS YOU WILL GAIN

  • Bayesian Network
  • Graphical Model
  • Markov Random Field

Course Apply Link – Probabilistic Graphical Models 1: Representation

Probabilistic Graphical Models 1: Representation Quiz Answers

Week 1 Quiz Answers


Quiz 1: Basic Definitions

Q1. Factor product.

Let X, Y be binary variables, and let Z be a variable that takes on values 1, 2, or 3.

If ϕ₁(X,Y) and ϕ₂(Y,Z) are the factors shown below, compute the selected entries (marked by a '?') in the factor ψ(X,Y,Z) = ϕ₁(X,Y) · ϕ₂(Y,Z), giving your answer according to the ordering of assignments to variables as shown below.

Separate each of the 3 entries of the factor with spaces, e.g., an answer of

0.1 0.2 0.3

means that ψ(1,1,2) = 0.1, ψ(1,2,1) = 0.2, and ψ(2,1,3) = 0.3. Give your answers as exact decimals without any trailing zeroes.
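The factor tables themselves appear only as images in the original quiz, so here is a small sketch of the factor-product operation with made-up values (the numbers are illustrative, not the quiz's):

```python
from itertools import product

# Hypothetical factor tables. phi1 is over (X, Y), phi2 over (Y, Z);
# X and Y take values in {1, 2}, Z in {1, 2, 3}.
phi1 = {(1, 1): 0.5, (1, 2): 0.8,
        (2, 1): 0.1, (2, 2): 0.3}
phi2 = {(1, 1): 0.5, (1, 2): 0.7, (1, 3): 0.1,
        (2, 1): 0.1, (2, 2): 0.2, (2, 3): 0.9}

# Factor product: psi(x, y, z) = phi1(x, y) * phi2(y, z),
# multiplying the entries that agree on the shared variable Y.
psi = {(x, y, z): phi1[(x, y)] * phi2[(y, z)]
       for x, y, z in product((1, 2), (1, 2), (1, 2, 3))}

print(psi[(1, 1, 2)])  # phi1(1,1) * phi2(1,2) = 0.5 * 0.7
```

The same pattern applies to the quiz's tables once you substitute their actual entries.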

Q2. Factor reduction.

Let X, Z be binary variables, and let Y be a variable that takes on values 1, 2, or 3.

Now say we observe Y = 1. If ϕ(X,Y,Z) is the factor shown below, compute the missing entries of the reduced factor ψ(X,Z) given that Y = 1, giving your answer according to the ordering of assignments to variables as shown below.

As before, separate the 4 entries of the factor by spaces.
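Since the quiz's factor table is an image, factor reduction can be sketched with made-up values: reducing to the evidence Y = 1 keeps only the rows consistent with it and drops Y from the scope.

```python
from itertools import product

# Hypothetical factor over (X, Y, Z): X, Z binary in {1, 2}, Y in {1, 2, 3}.
# The values below are made up for illustration.
phi = {(x, y, z): round(0.1 * x + 0.01 * y + 0.001 * z, 3)
       for x, y, z in product((1, 2), (1, 2, 3), (1, 2))}

# Factor reduction to the evidence Y = 1: select the consistent rows
# and remove Y from the scope.
psi = {(x, z): phi[(x, 1, z)] for x, z in product((1, 2), (1, 2))}

print(psi)  # 4 entries, one per assignment to (X, Z)
```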

Q3. Properties of independent variables.

Assume that A and B are independent random variables. Which of the following options are always true? You may select 1 or more options.

  • P(B|A) = P(B)
  • P(A,B) = P(A) × P(B)
  • P(A) = P(B)
  • P(A) ≠ P(B)
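To see why the first two options always hold while the last two need not, one can check a small joint distribution built so that A and B are independent (the marginals are illustrative, not from the quiz):

```python
from itertools import product

# Construct a joint in which A and B are independent by design.
pA = {0: 0.3, 1: 0.7}
pB = {0: 0.6, 1: 0.4}
joint = {(a, b): pA[a] * pB[b] for a, b in product((0, 1), (0, 1))}

# P(A,B) = P(A) * P(B) holds by construction, and P(B|A) = P(B) follows:
for a, b in joint:
    pa = sum(joint[(a, b2)] for b2 in (0, 1))        # marginal P(A = a)
    assert abs(joint[(a, b)] / pa - pB[b]) < 1e-12   # P(B|A) equals P(B)

# ...but P(A) = P(B) fails here (0.3 vs 0.6), and P(A) != P(B) would fail
# if both marginals were uniform, so neither equality claim is always true.
```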

Q4. Factor marginalization.

Let X, Z be binary variables, and let Y be a variable that takes on values 1, 2, or 3.

If ϕ(X,Y,Z) is the factor shown below, compute the entries of the factor

ψ(Y,Z) = Σ_X ϕ(X,Y,Z),

giving your answer according to the ordering of assignments to variables as shown below.

Separate the 4 entries of the factor with spaces, and do not add any extra trailing or leading zeroes or decimal points.
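As with the previous questions, the table is an image, so here is a sketch of factor marginalization over made-up values: summing X out of the scope.

```python
from itertools import product

# Hypothetical factor over (X, Y, Z): X, Z binary, Y in {1, 2, 3}
# (illustrative values, not the quiz's).
phi = {(x, y, z): 0.05 * x * y + 0.01 * z
       for x, y, z in product((1, 2), (1, 2, 3), (1, 2))}

# Factor marginalization: psi(y, z) = sum over x of phi(x, y, z).
psi = {(y, z): sum(phi[(x, y, z)] for x in (1, 2))
       for y, z in product((1, 2, 3), (1, 2))}

print(psi)  # one entry per assignment to (Y, Z)
```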


Quiz 2: Bayesian Network Fundamentals

Q1. Factorization.

  • P(A,B,C,D) = P(A)P(B)P(C|A)P(C|B)P(D|B)
  • P(A,B,C,D) = P(A)P(B)P(C|A,B)P(D|B)
  • P(A,B,C,D) = P(A)P(B)P(C)P(D)
  • P(A,B,C,D) = P(A)P(B)P(A,B|C)P(B|D)

Q2. Independent parameters.

If you haven't come across the term before, here's a brief explanation: A multinomial distribution over m possibilities x_1, …, x_m has m parameters, but m−1 independent parameters, because the parameters must sum to 1, so if you specify m−1 of them, the final one is fixed. In a CPD P(X|Y), if X has m values and Y has k values, then we have k distinct multinomial distributions, one for each value of Y, each with m−1 independent parameters, for a total of k(m−1). More generally, in a CPD P(X | Y_1, …, Y_r), if each Y_i has k_i values, we have a total of k_1 × … × k_r × (m−1) independent parameters.

Example: Let's say we have a graphical model that is just X → Y, where both variables are binary. In this scenario, we need 1 parameter to define the CPD of X. The CPD of X contains two entries, P(X=0) and P(X=1). Since these two entries must sum to 1, we only need one parameter to define the CPD.

Now we look at Y. The CPD for Y contains 4 entries, which correspond to P(Y=0|X=0), P(Y=1|X=0), P(Y=0|X=1), and P(Y=1|X=1). Note that P(Y=0|X=0) and P(Y=1|X=0) should sum to one, so we need 1 independent parameter to describe those two entries; likewise, P(Y=0|X=1) and P(Y=1|X=1) should also sum to 1, so we need 1 independent parameter for those two entries.

Therefore, we need 1 independent parameter to define the CPD of X and 2 independent parameters to define the CPD of Y.

  • 4
  • 3
  • 12
  • 8
  • 6
  • 7
  • 11
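The counting rule explained above can be sketched in a few lines (the helper name is mine, not part of the course code):

```python
from math import prod

def independent_params(m, parent_cards):
    """Independent parameters of a table CPD P(X | Y_1, ..., Y_r), where X
    has m values and parent_cards lists each parent's number of values:
    one multinomial with m - 1 free entries per parent configuration.
    (math.prod of an empty list is 1, covering the no-parent case.)"""
    return prod(parent_cards) * (m - 1)

# The X -> Y example from the text, both variables binary:
assert independent_params(2, []) == 1   # CPD of X (no parents)
assert independent_params(2, [2]) == 2  # CPD of Y given X
```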

Q3. *Inter-causal reasoning.

Calculate P(Accident = 1 | Traffic = 1) and P(Accident = 1 | Traffic = 1, President = 1). Separate your answers with a space, e.g., an answer of

0.15 0.25

means that P(Accident = 1 | Traffic = 1) = 0.15 and P(Accident = 1 | Traffic = 1, President = 1) = 0.25. Round your answers to two decimal places and write a leading zero, like in the example above.

Quiz 3: Bayesian Network Independencies

Q1. Independencies in a graph.

Which pairs of variables are independent in the graphical model below, given that none of them have been observed? You may select 1 or more options.

  • A, B
  • C, D
  • A, E
  • D, E
  • None – there are no pairs of independent variables.

Q2. *Independencies in a graph. (An asterisk marks a question that is more challenging. Congratulations if you get it right!)

  • None – given E, there are no pairs of variables that are independent.
  • A, B
  • A, C
  • A, D
  • B, D
  • D, C
  • B, C

Q3. I-maps. I-maps can also be defined directly on graphs as follows. Let I(G) be the set of independencies encoded by a graph G. Then G₁ is an I-map for G₂ if I(G₁) ⊆ I(G₂).

Which of the following statements about I-maps are true? You may select 1 or more options.

  • A graph K is an I-map for a graph G if and only if all of the independencies encoded by K are also encoded by G.
  • A graph K is an I-map for a graph G if and only if K encodes all of the independences that G has and more.
  • An I-map is a function ff that maps a graph G to itself, i.e., f(G) = Gf(G)=G.
  • The graph K that is the same as the graph G, except that all of the edges are oriented in the opposite direction as the corresponding edges in G, is always an I-map for G, regardless of the structure of G.
  • I-maps are Apple’s answer to Google Maps

Q4. *Naive Bayes.

Assume a population size of 10,000. Which of the following statements are true in this model? You may select 1 or more options.

  • Say we observe that 1000 people have the flu, out of which 500 people have a headache (and possibly other symptoms) and 500 have a fever (and possibly other symptoms). We would expect that approximately 250 people with the flu also have both a headache and fever.
  • Say we observe that 1000 people have a headache (and possibly other symptoms), out of which 500 people have the flu (and possibly other symptoms), and 500 people have a fever (and possibly other symptoms). We would expect that approximately 250 people with a headache also have both the flu and a fever.
  • Say we observe that 500 people have a headache (and possibly other symptoms) and 500 people have a fever (and possibly other symptoms). Without more information, we cannot estimate how many people have both a headache and fever.
  • Say we observe that 500 people have a headache (and possibly other symptoms) and 500 people have a fever (and possibly other symptoms). We would expect that approximately 250 people have both a headache and fever.
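For the scenario in which the 1000 flu patients are observed, the naive Bayes structure makes the symptoms conditionally independent given the class, which licenses the approximately-250 estimate. A quick sketch of the arithmetic:

```python
n_flu = 1000                        # people observed to have the flu
p_headache_given_flu = 500 / n_flu  # 500 of them have a headache
p_fever_given_flu = 500 / n_flu     # 500 of them have a fever

# Naive Bayes: symptoms are conditionally independent given the class,
# so P(headache, fever | flu) = P(headache | flu) * P(fever | flu).
expected_both = n_flu * p_headache_given_flu * p_fever_given_flu
print(expected_both)  # -> 250.0
```

By contrast, when only marginal headache and fever counts are known and flu is not observed, no such factorization applies, so the joint count is not determined.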

Q5. I-maps.

Suppose (A ⊥ B) ∈ I(P), and G is an I-map of P, where G is a Bayesian network and P is a probability distribution. Is it necessarily true that (A ⊥ B) ∈ I(G)?

  • Yes
  • No

Quiz 4: Octave/Matlab installation

Q1. The platform requires us to have one graded assignment in every honors lesson, so we have to ask: have you successfully installed Octave or MATLAB?

  • Yes
  • No

Week 2 Quiz Answers

Quiz 1: Template Models

Q1. Markov Assumption.

If a dynamic system X satisfies the Markov assumption for all time t ≥ 0, which of the following statements must be true? You may select 1 or more options.

  • (X^(t+1) ⊥ X^(0:(t−1)) | X^(t))
  • P(X^(t+1)) = P(X^(t−1)) for all possible values of X
  • (X^(t+1) ⊥ X^(0:(t−1)))

Q2. Independencies in DBNs.

  • (X^(t+1) ⊥ X^(t) | X^(t−1))
  • (O^(t) ⊥ X^(t−1) | X^(t))
  • (O^(t) ⊥ O^(t−1))
  • (X^(t) ⊥ X^(t−1))

Q3. Applications of DBNs.

For which of the following applications might one use a DBN (i.e. the Markov assumption is satisfied)? You may select 1 or more options.

  • Modeling the behavior of people, where a person’s behavior is influenced by only the behavior of people in the same generation and the people in his/her parents’ generation.
  • Modeling data taken at different locations along a road, where the data at each location is influenced by only the data at the same location and at the location directly to the East
  • Modeling time-series data, where the events at each time-point are influenced by only the events at the one time-point directly before it
  • Predicting the probability that today will be a snow day (school will be closed because of the snow), when this probability depends only on whether yesterday was a snow day.

Q4. Plate Semantics.

Let A and B be random variables inside a common plate indexed by i. Which of the following statements must be true? You may select 1 or more options.

  • For each i, A(i) and B(i) have the same CPDs.
  • For each i, A(i) and B(i) have edges connecting them to the same variables outside of the plate.
  • If there is an instance of A for some i, then there is no instance of B for that i.
  • There is an instance of A and an instance of B for every i.

Q5. *Plate Interpretation.

  • Whether a specific teacher T taught a specific course C at school S
  • None of these options can represent X in the grounded model
  • Whether a specific teacher T is a tough grader
  • Whether someone with expertise E taught something of difficulty D at a place in location L
  • Whether someone with expertise E taught something of difficulty D at school S

Q6. Grounded Plates.

Using the same plate model, now assume that there are s schools, t teachers in each school, and c courses taught by each teacher. How many instances of the Location variable are there?

  • s
  • stc
  • s^2
  • t

Q7.

  • K · L · M
  • K · (L + M)
  • L · M
  • (L · M)^K

Q8. Template Models. Consider the plate model from the previous question. What might P represent?

  • Whether a specific product PROD was consumed by consumer C in market M
  • Whether a specific product PROD was consumed by consumer C in all markets
  • Whether a specific product of brand q was consumed by a consumer with age t in a market of type m that is in location a
  • Whether a specific product PROD was consumed by consumer C in market M in location L

Q9.

  • (a)
  • (b)
  • (c)

Q10. *Unrolling DBNs. Below are 2-TBNs that could be unrolled into DBNs. Consider these unrolled DBNs (note that there are no edges within the first time-point). In which of them will (X^(t) ⊥ Z^(t) | Y^(t)) hold for all t, assuming Obs^(t) is observed for all t and X^(t) and Z^(t) are never observed? You may select 1 or more options.

  • (a)
  • (b)
  • (c)

Quiz 2: Structured CPDs

Q1. Causal Influence. Consider the CPD below. What is the probability that E = e_0 in the following graph, given an observation A = a_0, B = b_1, C = c_1, D = d_1? Note that, for the pairs of probabilities that make up the leaves, the probability on the left is the probability of e_0, and the probability on the right is the probability of e_1.

Q2. Independencies with Deterministic Functions. In the following Bayesian network, the node B is a deterministic function of its parent A. Which of the following is an independence statement that holds in the network? You may select 1 or more options.

  • (A ⊥ B | C, D)
  • (B ⊥ D | C)
  • (A ⊥ D | B)
  • (C ⊥ D | B)

Q3. Independencies in Bayesian Networks. For the network in the previous question, let B no longer be a deterministic function of its parent A. Which of the following is an independence statement that holds in the modified Bayesian network? You may select 1 or more options.

  • (B ⊥ D | A)
  • (C ⊥ D | B)
  • (C ⊥ D | A)
  • (A ⊥ D | C)

Q4. Context-Specific Independencies in Bayesian Networks. Which of the following are context-specific independences that do exist in the tree CPD below? (Note: Only consider independencies in this CPD, ignoring other possible paths in the network that are not shown here. You may select 1 or more options.)

  • (E ⊥c C | b^0, d^0)
  • (E ⊥c D | a^0)
  • (E ⊥c D | b^1)
  • (E ⊥c D, B | a^1)

Quiz 3: BNs for Genetic Inheritance PA Quiz

Q1. This quiz is a companion quiz to Programming Assignment: Bayes Nets for Genetic Inheritance. Please refer to the writeup for the programming assignment for instructions on how to complete this quiz.

James and Rene come to a genetic counselor because they are deciding whether to have another child or adopt. They want to know the probability that their un-born child will have cystic fibrosis.

Consider the Bayesian network for cystic fibrosis. We consider a person’s phenotype variable to be “observed” if the person’s phenotype is known. Order the probabilities of their un-born child having cystic fibrosis in the following situations from smallest to largest: (1) No phenotypes are observed (nothing clicked), (2) Jason has cystic fibrosis, (3) Sandra has cystic fibrosis.

  • (3), (1), (2)
  • (1), (2), (3)
  • (3), (2), (1)
  • (2), (3), (1)
  • (1), (3), (2)

Q2. James never knew his father Ira because Ira passed away in an accident when James was a few months old. Now James comes to the genetic counselor wanting to know if Ira had cystic fibrosis. The genetic counselor wants your help in determining the probability that Ira had cystic fibrosis. Consider the Bayesian network for cystic fibrosis. We consider a person’s phenotype variable to be “observed” if the person’s phenotype is known. Order the probabilities of Ira having had cystic fibrosis in the following situations from smallest to largest: (1) No phenotypes are observed (nothing clicked), (2) Benjamin has cystic fibrosis, (3) Benjamin and Robin have cystic fibrosis.

  • (1), (3), (2)
  • (3), (2), (1)
  • (2), (3), (1)
  • (1), (2), (3)
  • (3), (1), (2)

Q3. Recall that, for a trait with 2 alleles, the CPD for genotype given parents’ genotypes has 27 entries, and 18 parameters were needed to specify the distribution. How many parameters would be needed if the trait had 3 alleles instead of 2?
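The 27 and 18 quoted in the question follow from a simple count: with k alleles there are k(k+1)/2 unordered genotypes g, the CPD table P(child genotype | mother's, father's genotype) has g³ entries, and each of its g² rows sums to 1, leaving g−1 free parameters per row. A sketch of that count (the helper name is mine, not from the assignment):

```python
def genotype_cpd_counts(n_alleles):
    """Entries and independent parameters of the table CPD
    P(child genotype | mother genotype, father genotype)."""
    g = n_alleles * (n_alleles + 1) // 2  # unordered genotypes
    entries = g ** 3                      # g child rows per g*g parent pairs
    params = g * g * (g - 1)              # each row sums to 1
    return entries, params

assert genotype_cpd_counts(2) == (27, 18)  # matches the question's numbers
print(genotype_cpd_counts(3))              # -> (216, 180)
```

Note this counts a full table CPD; the assignment's hint about inheritance constraints applies to the decoupled network, where fewer parameters are needed.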

Q4. You will now gain some intuition for why decoupling a Bayesian network can be worthwhile. Consider a non-decoupled Bayesian network for cystic fibrosis with 3 alleles over the pedigree that was used in section 2.4 and 3.3. How many parameters are needed to specify all probability distributions across the entire network?

Q5. Now consider the decoupled Bayesian network for cystic fibrosis with 3 alleles over the pedigree that was used in section 2.4 and 3.3. How many parameters are needed to specify all of the probability distributions across the entire network?

Hint: A child cannot inherit an allele that is not present in either parent, so there aren’t as many degrees of freedom here as there might be without that context-specific information.

Q6. Consider the decoupled Bayesian network for cystic fibrosis with three alleles that you constructed in section 3.3. We consider a person’s gene copy variable to be “observed” if the person’s allele for that copy of the gene is known.

James and Rene are debating whether to have another child or adopt a child. They are concerned that, if they have a child, the child will have cystic fibrosis because both of them have one F allele observed (their other gene copy is not observed), even though neither of them have cystic fibrosis. You want to give them advice, but they refuse to tell you whether anyone else in their family has cystic fibrosis. What is the probability (NOT a percentage) that their unborn child will have cystic fibrosis?

Q7. Consider a Bayesian network for spinal muscular atrophy (SMA), in which there are multiple genes and 2 phenotypes.

Let n be the number of genes involved in SMA and m be the maximum number of alleles per gene. How many parameters are necessary if we use a table CPD for the probabilities for phenotype given copies of the genes from both parents?

  • O(m^2)
  • Depends on the phenotype
  • O(mn)
  • O(m+n)
  • O(n)
  • O(2^n)
  • O(m^(2n))
  • O(4^n)

Q8. Consider the Bayesian network for spinal muscular atrophy (SMA), in which there are multiple genes and two phenotypes.

Let n be the number of genes involved in SMA and m be the maximum number of alleles per gene. How many parameters are necessary if we use a sigmoid CPD for the probabilities for phenotype given copies of the genes from both parents?

  • O(max(m,n))
  • O(m)
  • O((mn)^2)
  • Depends on the phenotype
  • O(m^2 n)
  • O(mn)
  • O(n)
  • O(m+n)

Q9. Consider genes A and B that might be involved in spinal muscular atrophy. Assume that A has 2 alleles, A_1 and A_2, and B has 2 alleles, B_1 and B_2. Which of the following relationships between A and B can a sigmoid CPD capture?

  • Allele A_1 and allele B_1 make a person equally more likely to have SMA, but when both are present the effect on SMA is the same as when only one is present.
  • Neither gene A nor gene B contributes to SMA.
  • Alleles A_1 and B_1 make a person more likely to have SMA when both are present, but neither affects SMA otherwise.
  • Allele A_1 makes a person more likely to have SMA, while allele B_1 independently makes a person less likely to have SMA.
  • When the alleles are A_1 and B_2, or A_2 and B_1, the person has SMA; otherwise the person does not have SMA.
  • Gene A contributes to SMA, but gene B does not contribute to SMA and thus does not affect the effects of gene A on SMA.
  • Alleles A_1 and B_1 each independently make a person likely to have SMA.

Q10. Consider the Bayesian network for spinal muscular atrophy that we provided in spinalMuscularAtrophyBayesNet.net. We consider a person’s gene copy variable to be “observed” if the person’s allele for that copy of that gene is known.

Now say that Ira and Robin come to the genetic counselor because they are debating whether to have a biological child or adopt and are concerned that their child might have spinal muscular atrophy. They have some genetic information, but because sequencing is still far too expensive to be affordable for everyone, their information is limited to only a few genes and to only 1 chromosome in each pair of chromosomes.

Order the probabilities of their un-born child having spinal muscular atrophy in the following situations from smallest to largest: (1) No genetic information or phenotypes are observed (nothing clicked), (2) Ira and Robin each have at least 1 M allele, (3) Ira and Robin each have at least 1 M allele and at least 1 B allele.

  • (3), (1), (2)
  • (1), (2), (3)
  • (1), (3), (2)
  • (3), (2), (1)
  • (2), (3), (1)

Q11. Consider the Bayesian network for spinal muscular atrophy that we provided in spinalMuscularAtrophyBayesNet.net.

No longer interested in finding out whether his father had cystic fibrosis, James comes to the genetic counselor with another question: Did his father have spinal muscular atrophy? The genetic counselor now wants your help in figuring this out. This time, however, James has other information for you: both he and Robin have spinal muscular atrophy.

What is the probability (NOT a percentage) that Ira had spinal muscular atrophy?

Week 3 Quiz Answers

Quiz 1: Markov Networks

Q1. Factor Scope. Let ϕ(c,e) be a factor in a graphical model, where c is a value of C and e is a value of E. What is the scope of ϕ?

  • {A, B, C, E}
  • {C, E}
  • {A, C, E}
  • {C}
  • C, D
  • No pair of variables are independent on each other.
  • D, E

Q3. Factorization. Which of the following is a valid Gibbs distribution over this graph?

  • ϕ(A, B, C, D, E, F)
  • ϕ(A) × ϕ(B) × ϕ(C) × ϕ(D) × ϕ(E) × ϕ(F)
  • ϕ(A, B, D) × ϕ(C, E, F) / Z, where Z is the partition function
  • ϕ(A) × ϕ(B) × ϕ(C) × ϕ(D) × ϕ(E) × ϕ(F) / Z, where Z is the partition function

Q4. Factors in Markov Network. Let ϕ(A,B,C) be a factor in a probability distribution that factorizes over a Markov network. Which of the following must be true? You may select 1 or more options.

  • A, B, and C form a clique in the network.
  • ϕ(a,b,c) ≤ 1, where a is a value of A, b is a value of B, and c is a value of C.
  • ϕ(a,b,c) ≥ 0, where a is a value of A, b is a value of B, and c is a value of C.
  • A, B, and C do not form a clique in the network.
  • There is no path from A to B, no path from B to C, and no path from A to C in the network.

Quiz 2: Independencies Revisited

Q1. I-Maps. Graph G (shown below) is a perfect I-map for distribution P, i.e. I(G) = I(P). Which of the other graphs is an I-map (not necessarily a perfect map) for P?

  • III
  • None of the above
  • II
  • I and III

Q2. I-Equivalence. In the figure below, graph G is I-equivalent to which other graph(s)?

  • I
  • III
  • None of the above
  • I and III

Q3. *I-Equivalence. Let Bayesian network G be a simple directed chain X_1 → X_2 → … → X_n for some number n. How many Bayesian networks are I-equivalent to G, including G itself?

  • n
  • n!
  • 2^(n−1)
  • 2n
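One way to sanity-check this question is by brute force: two networks are I-equivalent iff they share the same skeleton and the same v-structures, and on a path skeleton a node's two neighbors are never adjacent, so every collider is a v-structure. Since the chain has none, it suffices to count collider-free orientations of the path (a small script written for illustration):

```python
from itertools import product

def count_i_equivalent_chains(n):
    """Count orientations of the path skeleton X1 - X2 - ... - Xn with no
    collider; these are exactly the networks I-equivalent to the chain."""
    total = 0
    for dirs in product("RL", repeat=n - 1):  # direction of each path edge
        # A collider sits at an internal node when the edge to its left
        # points right and the edge to its right points left ("RL").
        if not any(a == "R" and b == "L" for a, b in zip(dirs, dirs[1:])):
            total += 1
    return total

for n in range(2, 8):
    print(n, count_i_equivalent_chains(n))
```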

Week 4

Quiz 1: Decision Theory

Q1. Utility Curves. What does the point marked A on the Y axis correspond to? (Mark all that apply.)

  • 0.5U($0)+0.5U($1000)
  • U(ℓ), where ℓ is a lottery that pays $0 with probability 0.5 and $1000 with probability 0.5.
  • $500
  • U($500)

Q2. Utility Curves. What does the point marked B on the Y axis correspond to? (Mark all that apply.)

  • $500
  • U($500)
  • U(ℓ), where ℓ is a lottery that pays $0 with probability 0.5 and $1000 with probability 0.5.
  • 0.5U($0)+0.5U($1000)

Q3. Expected Utility. In the simple influence diagram on the right, with the CPD for M and the utility function V, what is the expected utility of the action f^1?

  • 5
  • 2
  • 0
  • 20

Q4. *Uninformative Variables. In the influence diagram on the right, what is an appropriate way to have the model account for the fact that if the Test wasn't performed (t^0), then the survey is uninformative?

  • Set P(S | M, t^0) so that S takes some new value "not performed" with probability 1.
  • Set P(S | M, t^0) to be uniform.
  • Set P(S | M, t^0) = P(S | M, t^1).
  • Set P(S | M, t^0) so that S takes the value s^0 with probability 1.

Quiz 2: Decision Making PA Quiz

Q1. This quiz is a companion quiz to the Programming Assignment on Decision Making. Please refer to the writeup for the programming assignment for instructions on how to complete this quiz.

We have provided an instantiated influence diagram FullI (complete with a decision rule for D) in the file FullI.mat. What is the expected utility for this influence diagram? Please round to the nearest tenth (i.e., 1 decimal place), do not include commas, and do not write the number in scientific notation.

Enter answer here

Q2. Run ObserveEvidence.m on FullI to account for the following: We have been informed that variable 3 in the model, which models an overall genetic risk for ARVD, has value 2 (indicating the presence of genetic risk factors). Then run SimpleCalcExpectedUtility on the modified influence diagram. What happened to the expected utility? (Hint — ObserveEvidence does not re-normalize the factors so that they are again valid CPDs unless the normalize flag is set to 1. — If you do not use the normalize flag, you can use NormalizeCPDFactors.m to do the normalization.)

  • It substantially decreased.
  • It did not change.
  • It substantially increased.
  • The expected utility might or might not change because there is some randomness in the process for determining the expected utility.

Q3. Why can we explicitly enumerate all the possible decision rules while we often cannot enumerate over all possible CPDs?

  • If there is one choice in a decision rule, at least one choice must have a 0 probability, whereas in a general CPD, no entries are restricted to having 0 probabilities.
  • In an influence diagram, each decision node cannot have more than 1 parent, while in a general Bayes net, a node can have many parents.
  • All choices have a probability of either 0 or 1, whereas in a general CPD, choices could take on any value in [0, 1].
  • We can actually always enumerate over all possible CPDs.

Q4. Let a decision node D take on d possible values. Let it have m parents that can each take on n possible values. How many possible decision rules δ_D are there?

  • d(n^m)
  • d(m^n)
  • d^(n^m)
  • d^(2n^m)
  • dnm
  • 2d(n^m)
  • d^(m^n)
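A decision rule deterministically picks one of the d actions for each of the n^m joint assignments to the parents, so the rules are exactly the functions from parent assignments to actions. A small enumeration confirms the count (the helper is written for illustration):

```python
from itertools import product

def count_decision_rules(d, n, m):
    """Enumerate the deterministic decision rules: one action choice
    (out of d) per joint assignment to m parents with n values each."""
    parent_assignments = list(product(range(n), repeat=m))
    rules = list(product(range(d), repeat=len(parent_assignments)))
    return len(rules)

# d = 2 actions, m = 2 parents with n = 3 values each:
print(count_decision_rules(2, 3, 2))  # -> 512, i.e. 2 ** (3 ** 2)
```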

Q5. Consider an influence diagram with 1 decision node D that can take on d values. Let D have m parents that can each take on n values. Assume that running sum-product inference takes O(S) time. What is the run-time complexity of running OptimizeMEU on this influence diagram?

  • O(Sdnm)
  • O(S + dn^m)
  • O(S + n^m)
  • O(S + dnm)
  • O(Sdn^m)
  • O(d^(n^m))
  • O(Sn^m)

Q6. In which of the following situations does it make sense to use OptimizeWithJointUtility instead of OptimizeLinearExpectations?

  • When the bottleneck in inference is in enumerating the large number of possible assignments to the parents of the utility variables, and each utility variable has a disjoint set of parents.
  • When there are large factors in the random-variables part of the influence diagram, making inference over the network slow, and there are only a few utility factors, each involving a small number of variables.
  • When the scopes of the utility factors are large compared to the scopes of the other (random variable) factors.
  • When every random variable in the network is a parent of at least one other utility factor.

Q7. In the field below, enter the dollar value of the test T1, rounded to the nearest cent (e.g., “1.23” means that you would pay $1.23 for the test; any more than that, and your net utility will be lower than if you didn’t perform any test). Do not precede with the amounts with dollar signs.

Enter answer here

Q8. In the field below, enter the dollar value of the test T2, rounded to the nearest cent (e.g., “1.23” means that you would pay $1.23 for the test; any more than that, and your net utility will be lower than if you didn’t perform any test). Do not precede with the amounts with dollar signs.

Enter answer here

Q9. In the field below, enter the dollar value of the test T3, rounded to the nearest cent (e.g., “1.23” means that you would pay $1.23 for the test; any more than that, and your net utility will be lower than if you didn’t perform any test). Do not precede with the amounts with dollar signs.

Enter answer here

Week 5 Quiz Answers

Quiz 1: Representation Final Exam

Q1. Template Model Representation. Consider the following scenario:

On each campus there are several Computer Science students and several Psychology students (each student belongs to one xor the other group). We have a binary variable L for whether the campus is large, a binary variable S for whether the CS student is shy, a binary variable C for whether the Psychology student likes computers, and a binary variable F for whether the Computer Science student is friends with the Psychology student. Which of the following plate models can represent this scenario?

  • None of these plate models can represent this scenario
  • (B)
  • (D)
  • (A)

Q2. Partition Function. Which of the following is a use of the partition function?

  • One can divide factor products by the partition function in order to convert them into probabilities.
  • The partition function is the probability of each variable in the graph taking on a specific value.
  • The partition function describes the probability that it is possible to partition the graph into groups of connected variables, where each variable within a group has the same value.
  • The partition function is used only in the context of Bayesian networks, not Markov networks.
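To make the first option concrete: an unnormalized product of factors becomes a probability distribution only after dividing by the partition function Z. A made-up single-factor example over two binary variables:

```python
# Illustrative unnormalized factor over binary variables (A, B);
# the values are invented for this sketch.
phi = {(0, 0): 3.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 5.0}

Z = sum(phi.values())                     # partition function
P = {ab: v / Z for ab, v in phi.items()}  # now a valid distribution

print(Z, P[(1, 1)])  # Z = 10.0, P(A=1, B=1) = 0.5
```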

Q3. *I-Equivalence. Let T be any directed tree (not a polytree) over n nodes, where n ≥ 1. A directed tree is a traditional tree, where each node has at most one parent and there is only one root, i.e., all but one node has exactly one parent. (In a polytree, nodes may have multiple parents.) How many networks (including itself) are I-equivalent to T?

  • n
  • n+1
  • n!
  • Depends on the specific structure of T.

Q4. *Markov Network Construction. Consider the unrolled network for the plate model shown below, where we have n students and m courses. Assume that we have observed the grade of all students in all courses. In general, what does a pairwise Markov network that is a minimal I-map for the conditional distribution look like? (Hint: the factors in the network are the CPDs reduced by the observed grades. We are interested in modeling the conditional distribution, so we do not need to explicitly include the Grade variables in this new network. Instead, we model their effect by appropriately choosing the factor values in the new network.)

  • A fully connected graph with instantiations of the Difficulty and Intelligence variables.
  • Impossible to tell without more information on the exact grades observed.
  • A fully connected bipartite graph where instantiations of the Difficulty variables are on one side and instantiations of the Intelligence variables are on the other side.
  • A graph over instantiations of the Difficulty variables and instantiations of the Intelligence variables, not necessarily bipartite; there could be edges between different Difficulty variables, and there could also be edges between different Intelligence variables.
  • A bipartite graph where instantiations of the Difficulty variables are on one side and instantiations of the Intelligence variables are on the other side. In general, this graph will not be fully connected.

Q5. Grounded Plates.

Which of the following is a valid grounded model for the plate shown? You may select 1 or more options.

  • (b) — watch out, options are not in order
  • (a) — watch out, options are not in order
  • (c) — watch out, options are not in order

Q6. Independencies in Markov Networks.

Consider the following set of factors: Φ = {Φ₁(A, B), Φ₂(B, C, D), Φ₃(D), Φ₄(C, E, F)}. Now, consider a Markov network G such that P_Φ factorizes over G. Which of the following is an independence statement that holds in the network? You may select 1 or more options.

  • (A \perp E \mid B)(AEB)
  • (C \perp E \mid B)(CEB)
  • (C \perp D \mid A)(CDA)
  • (B \perp E \mid C)(BEC)
  • (B \perp E \mid A)(BEA)
  • (A \perp F \mid C)(AFC)

Q7. Factorization of Probability Distributions.

Consider a directed graph G. We construct a new graph G′ by removing one edge from G. Which of the following is always true? You may select 1 or more options.

  • If G and G′ were undirected graphs, the answers to the other options would not change.
  • Any probability distribution P that factorizes over G also factorizes over G′.
  • Any probability distribution P that factorizes over G′ also factorizes over G.
  • No probability distribution P that factorizes over G also factorizes over G′.
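The intuition behind this question can be checked numerically. Here is a hedged sketch (an assumed two-variable example, not part of the quiz): take G to be A → B and G′ to be the graph with that edge removed. A distribution that factorizes over G′ as P(A)P(B) also factorizes over G, because we can use a CPD P(B | A) that simply ignores its parent A.

```python
import itertools

# A distribution factorizing over the edgeless graph G': P(A, B) = P(A) P(B).
pA = {0: 0.3, 1: 0.7}
pB = {0: 0.6, 1: 0.4}

P = {(a, b): pA[a] * pB[b] for a, b in itertools.product([0, 1], repeat=2)}

# Factorization over G (A -> B): P(A, B) = P(A) * P(B | A).
# Choosing P(B | A=a) = P(B) for every a makes the two factorizations agree.
for (a, b), p in P.items():
    assert abs(p - pA[a] * pB[b]) < 1e-12   # CPD for B ignores its parent A
```

The converse direction fails in general: a distribution where A and B are correlated factorizes over G but has no factorization over the edgeless G′.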

Q8. Template Model in CRF.

The CRF model for OCR with only singleton and pairwise potentials that you played around with in PA3 and PA7 is an instance of a template model, with variables C_1, …, C_n over the characters and observed images I_1, …, I_n. The model we used is a template model in that the singleton potentials are replicated across different C_i variables, and the pairwise potentials are replicated across character pairs. The structure of the model is shown below:

Now consider the advantages of this particular template model for the OCR task, as compared to a non-template model that has the same structure, but where there are distinct singleton potentials for each C_i variable, and distinct potentials for each pair of characters. Which of the following about the advantage of using a template model is true? You may select 1 or more options.

  • The same template model can be used for words of different lengths.
  • The template model can incorporate position-specific features, e.g. q-u occurs more frequently at the beginning of a word, while a non-template model cannot.
  • The inference is significantly faster with the template model.
  • Parameter sharing could make the model less susceptible to over-fitting when there is less training data.
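To see what "replicated potentials" means operationally, here is a toy sketch (my own, not PA3/PA7 code): a single shared pairwise potential table is reused between every adjacent character pair, so the same small parameter set scores words of any length.

```python
import math

# One shared pairwise template over a tiny two-letter alphabet; in the real
# OCR CRF this table would be learned, and singleton potentials would score
# each character against its image.
pairwise = {("a", "a"): 0.5, ("a", "b"): 2.0,
            ("b", "a"): 1.5, ("b", "b"): 0.5}

def score(word):
    """Unnormalized log-score using the single shared pairwise template."""
    return sum(math.log(pairwise[(c1, c2)])
               for c1, c2 in zip(word, word[1:]))

short = score("ab")      # one adjacent pair
longer = score("abab")   # same table reused at every position
```

Because every position reads from the same table, the parameter count stays fixed as words grow, which is exactly why parameter sharing helps with limited training data.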

Conclusion

Hopefully, this article helped you find all the week, final assessment, and peer-graded assessment answers for the Probabilistic Graphical Models 1: Representation quiz on Coursera and pick up some premium knowledge with less effort. If this article really helped you, make sure to share it with your friends on social media and let them know about this amazing course. You can also check out our other course answers. So stay with us; we will share many more free courses and their exam/quiz solutions. Follow our Techno-RJ Blog for more updates.
