Comments on { "blog" : "brusic" }: Machine Learning Ex2 - Linear Regression
Blog author: Ivan Brusic (http://www.blogger.com/profile/02435401670390574376)

Anonymous (2011-10-23, 10:30):

Oh, I just see that the vectorization has already been performed, so the only additional thing is to move as much as possible outside the loop, i.e. reducing it to what can be written in Octave as theta = invariant * [1 ; theta]. The challenge is to precompute the invariant matrix before the loop such that it is equivalent to your version. I'm not publishing it because it would allow students to cheat on the homework.

Btw, since the homework has no hard deadlines, arguably publishing a solution in another language is questionable, because a student may translate it back to Matlab and submit it as their own work.

Anonymous (2011-10-23, 10:24):

It would be even nicer not to do mapping but to perform matrix calculations (i.e. vectorized). In that case, the solution can be reduced to:

1. calculating a loop-invariant matrix outside the gradient descent loop (obviously before the loop, as the matrix will be needed inside it);
2. the gradient descent loop (the only iteration construct needed);
3. within the loop, repeatedly multiplying the theta vector (more specifically, the theta vector extended with an additional value of 1) by the matrix calculated in step 1.

The nice thing is that this works for an arbitrary number of variables and theta values, and of course it is much faster.
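The loop-invariant idea the commenters describe can be sketched as follows. This is a NumPy sketch, not the commenter's withheld Octave solution; the synthetic data, learning rate, and iteration count are illustrative assumptions. It relies on rewriting the batch gradient-descent update theta := theta - (alpha/m) * X'(X theta - y) as a single multiplication M * [1 ; theta], where M stacks the constant term (alpha/m) * X'y next to the constant matrix I - (alpha/m) * X'X:

```python
import numpy as np

# Hypothetical data: one-variable linear regression, y ~ 3 + 2x + noise.
m = 50
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, m)
y = 3.0 + 2.0 * x + rng.normal(0, 0.5, m)
X = np.column_stack((np.ones(m), x))  # design matrix with intercept column

alpha = 0.01       # learning rate (assumed value)
n_iters = 5000     # assumed iteration count

# Loop-invariant matrix M, computed once before the loop:
#   theta - (alpha/m) * X.T @ (X @ theta - y)
#     = (alpha/m) * X.T @ y  +  (I - (alpha/m) * X.T @ X) @ theta
#     = M @ [1 ; theta]
n = X.shape[1]
M = np.column_stack(((alpha / m) * X.T @ y,
                     np.eye(n) - (alpha / m) * X.T @ X))

# The gradient descent loop: only one matrix-vector product per step.
theta = np.zeros(n)
for _ in range(n_iters):
    theta = M @ np.concatenate(([1.0], theta))

print(theta)  # close to the true parameters [3, 2]
```

As the second comment notes, nothing here depends on the number of features: M is (n x n+1) for any n, so the same two lines inside the loop handle multi-variable regression unchanged.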