I understand the following:
In 2D space, each data point has 2 features: x and y. The weight vector contains 3 values [bias, w0, w1], which can be rewritten as [w0, w1, w2]. Each data point is augmented with an artificial coordinate of 1, giving [1, x, y], so that the dot product with the weight vector includes the bias term.
The learning rule used to update the weight vector for each misclassified point is w := w + yn * xn.
My question is: how do you derive two points from the weight vector w = [A, B, C] in order to graph the decision boundary?
I understand that A + Bx + Cy = 0 is the linear equation in general form (where A, B, C can be taken from the weight vector), but I don't know how to plot it.
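To show where I'm stuck, here is a minimal sketch of what I think the plotting step might look like: pick two arbitrary x values and solve A + Bx + Cy = 0 for y (this assumes C != 0; the weight values below are made-up, not from real training).

```python
import numpy as np

# Example weight vector w = [A, B, C] (made-up values, not from training)
w = np.array([0.5, 2.0, -1.0])
A, B, C = w

# The boundary is A + B*x + C*y = 0. Solving for y (assuming C != 0):
#   y = -(A + B*x) / C
x1, x2 = -1.0, 1.0            # any two distinct x values
y1 = -(A + B * x1) / C
y2 = -(A + B * x2) / C

# (x1, y1) and (x2, y2) are two points on the decision boundary;
# a line through them could then be drawn with e.g. matplotlib's plt.plot([x1, x2], [y1, y2]).
print((x1, y1), (x2, y2))
```

Is this the right idea, or is there a more standard way to get the two points?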
Thanks in advance.