This is a question from a sample test that my professor did not give the
solutions to, and I am studying for the exam later this week. The class
is a neural networks class. Please let me know if you can solve this
question.
It has two parts.
Given:
For the tansig activation function f, with y = f(x), the derivative can
be expressed in terms of the output as df(x)/dx = g(y) = 1 - y^2. A
neural network has one layer with two inputs, a weight matrix W (with
no biases), and two tansig output neurons.
Questions:
a) For this network, by attempting to minimize the squared output error
e = (t - a)^T * (t - a) for an exemplar pair (p, t), and using the
result above for the derivative of the tansig, develop an equation in
terms of t and p for updating the weight matrix W(k) from k = 0 to
k = 1. Assume W(0) = I (I is the identity matrix) and use the update
equation

    W^m(k + 1) = W^m(k) - alpha * s^m(k) * (a^(m-1)(k))^T

Basically, find W(1) in terms of p, t, and alpha.
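For part (a), here is my own attempt at the symbolic answer, assuming the backprop sensitivity s(k) = -2 F'(n) (t - a) that we used in lecture (that sensitivity formula is my assumption, so please check it against your notes):

```latex
% Forward pass with W(0) = I and no biases:
n(0) = W(0)\,p = p, \qquad a(0) = \mathrm{tansig}(p)

% Tansig derivative in terms of the output, from the given result:
F'(n) = \begin{bmatrix} 1 - a_1^2 & 0 \\ 0 & 1 - a_2^2 \end{bmatrix}

% Sensitivity for the squared error e = (t - a)^T (t - a):
s(0) = -2\,F'(n)\,\bigl(t - a(0)\bigr)

% One steepest-descent step:
W(1) = W(0) - \alpha\, s(0)\, p^T
     = I + 2\alpha\, F'(n)\,\bigl(t - \mathrm{tansig}(p)\bigr)\, p^T
```

If that is right, W(1) is the identity plus a rank-one correction built from p and t.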
b)
Carry out one cycle of this weight update, in terms of the parameter
alpha, for the exemplar pair

    p = [  1/4 ]        t = [ -1/2 ]
        [ -3/8 ]            [  1/8 ]

p and t are 2 x 1 matrices with the values shown above; it's hard to
draw matrices in this text box, but hopefully you get the idea.
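In case it helps, here is a small Python sketch I wrote to sanity-check part (b) numerically. It assumes the sensitivity s = -2 * F'(n) * (t - a) with F'(n) = diag(1 - a^2) (my assumption from lecture, not stated in the problem), and it plugs in a sample value for alpha since the question keeps alpha symbolic:

```python
import numpy as np

# Exemplar pair from part (b), as 2x1 column vectors
p = np.array([[1/4], [-3/8]])
t = np.array([[-1/2], [1/8]])

alpha = 0.1  # sample learning rate; the actual answer keeps alpha symbolic

# Forward pass with W(0) = I and no biases
W = np.eye(2)
n = W @ p
a = np.tanh(n)  # tansig is the same function as tanh

# Sensitivity: s = -2 * F'(n) * (t - a), using the given f'(x) = 1 - y^2
s = -2 * (1 - a**2) * (t - a)

# Steepest-descent update: W(1) = W(0) - alpha * s * p^T
W1 = W - alpha * (s @ p.T)
print(W1)
```

With alpha = 0.1 this gives a W(1) close to the identity, nudged by a rank-one term, which matches what I expect from the symbolic form.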
Thanks for your help.
If you can give me the answer by tomorrow, I'll pay $40.00.