Questions tagged [perceptron]

The perceptron is a basic linear classifier that outputs binary labels.

If the training data set is not linearly separable, the perceptron learning algorithm does not converge.
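
As a concrete illustration of the learning rule described above, here is a minimal sketch of the classic perceptron update in NumPy; the toy data, learning rate, and epoch cap are illustrative assumptions, not part of the tag description:

    import numpy as np

    # Toy, linearly separable data: label is +1 when x1 + x2 > 1.5, else -1 (illustrative only).
    X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
    y = np.array([-1, -1, -1, 1, 1])

    w = np.zeros(X.shape[1])  # weight vector
    b = 0.0                   # bias term
    lr = 1.0                  # learning rate

    for epoch in range(100):
        errors = 0
        for xi, yi in zip(X, y):
            # Update only on misclassified samples: w <- w + lr * y * x, b <- b + lr * y
            if yi * (np.dot(w, xi) + b) <= 0:
                w += lr * yi * xi
                b += lr * yi
                errors += 1
        if errors == 0:  # every training point is classified correctly, so training can stop
            break

    print(w, b)  # parameters of a separating hyperplane w.x + b = 0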

The classic XOR problem is a dataset that is not linearly separable, so a single perceptron cannot solve it. By adding nonlinear hidden layers between the input and output, one can separate all the data. With enough hidden units, the resulting network can approximate any continuous function to arbitrary precision. This generalization is known as a multilayer perceptron.

For more details, see the Wikipedia article on the perceptron.
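
To make the multilayer-perceptron idea concrete, the sketch below trains a tiny 2-4-1 NumPy network with backpropagation on XOR; the layer width, learning rate, random seed, and iteration count are arbitrary illustrative choices and may need adjusting:

    import numpy as np

    rng = np.random.default_rng(0)

    # XOR inputs and targets: not linearly separable in the input space.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # One hidden layer with a nonlinearity is enough to separate XOR.
    W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))
    W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 0.5
    for step in range(10000):
        # Forward pass: tanh hidden layer, sigmoid output.
        h = np.tanh(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)

        # Backward pass for a squared-error loss.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * (1 - h ** 2)   # derivative of tanh(x) is 1 - tanh(x)^2

        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0, keepdims=True)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0, keepdims=True)

    print(out.round(3))  # should approach [[0], [1], [1], [0]]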

440 questions
4 votes, 3 answers

Perceptron learning - most important feature

For one of my assignments in my AI class we were tasked with creating a perceptron learning implementation of the Widrow-Hoff delta rule. I've coded this implementation in Java: the following GitHub link contains the…
4 votes, 3 answers

Intuition about the kernel trick in machine learning

I have successfully implemented a kernel perceptron classifier that uses an RBF kernel. I understand that the kernel trick maps features to a higher dimension so that a linear hyperplane can be constructed to separate the points. For example, if…
rahulm • 724 • 5 • 10
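
The kernel-trick question above pairs naturally with a small worked example. The following sketch is a kernel perceptron with an RBF kernel on XOR-style data; the data, gamma value, and epoch cap are illustrative assumptions, not code from the question:

    import numpy as np

    def rbf_kernel(a, b, gamma=1.0):
        # RBF kernel: exp(-gamma * ||a - b||^2)
        return np.exp(-gamma * np.sum((a - b) ** 2))

    # XOR-style labels: not separable by a hyperplane in the original 2-D space.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([-1, 1, 1, -1])

    n = len(X)
    K = np.array([[rbf_kernel(X[i], X[j]) for j in range(n)] for i in range(n)])
    alpha = np.zeros(n)  # one mistake counter per training point

    for epoch in range(100):
        mistakes = 0
        for i in range(n):
            # The decision value is a kernel-weighted sum over past mistakes.
            f = np.sum(alpha * y * K[:, i])
            if y[i] * f <= 0:
                alpha[i] += 1
                mistakes += 1
        if mistakes == 0:
            break

    def predict(x):
        return np.sign(sum(alpha[i] * y[i] * rbf_kernel(X[i], x) for i in range(n)))

    print([predict(x) for x in X])  # expected: [-1, 1, 1, -1]
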
4 votes, 2 answers

Relation between perceptron accuracy and epoch

Is it possible that the accuracy of a perceptron decreases as I go through the training more times? In this case, I use the same training set several times.
4 votes, 1 answer

Activation function for multilayer perceptron

I have tried to train a simple backpropagation neural network on the XOR function. When I use tanh(x) as the activation function, with the derivative 1 - tanh(x)^2, I get the right result after about 1000 iterations. However, when I use g(x) =…
user1767774 • 1,585 • 1 • 21 • 29
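
The excerpt above relies on the identity that the derivative of tanh(x) is 1 - tanh(x)^2. A quick standalone numerical check of that identity (not code from the question) can be done with a central finite difference:

    import numpy as np

    x = np.linspace(-3, 3, 7)
    analytic = 1 - np.tanh(x) ** 2                          # claimed derivative of tanh
    h = 1e-6
    numeric = (np.tanh(x + h) - np.tanh(x - h)) / (2 * h)   # central finite difference

    print(np.max(np.abs(analytic - numeric)))  # tiny, around 1e-10 or smaller
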
3 votes, 1 answer

Why can my perceptron not perfectly separate a number of points that is less than the number of features?

I am quite new to machine learning and decided that a good way to start getting some experience would be to play around with some real databases and the Python scikit-learn library. I used Haberman's surgery data, a binary classification task, that can…
3 votes, 1 answer

How to fit a scikit-learn Perceptron on the cropped SVHN dataset?

I have to classify the SVHN dataset with the Perceptron from the scikit-learn library in Python, but I don't understand why the accuracy score is very low (21%); the dataset is the SVHN dataset in cropped-image format and I have to pass the image in…
3 votes, 1 answer

How to manage connections of neurons inside an ANN using OpenCV::ML?

So there is a great sample (the only real sample we found), and it is quite limiting. It shows how to create an architecture of an artificial neural network where all neurons of one layer are connected (forward) to all neurons of the following (next) layer.…
Rella • 59,216 • 102 • 341 • 614
3 votes, 1 answer

How do you properly implement and verify a perceptron & gradient descent to learn basic Boolean functions in Haskell?

I'm trying to construct a perceptron unit in Haskell to learn the Boolean AND and OR functions, as in Mitchell's book. I'm pretty sure my gradient descent is correct, but I'm struggling to verify whether it actually learns the algorithm. I have…
user5775230
3 votes, 1 answer

XOR neural network, the losses don't go down

I'm using MXNet to train an XOR neural network, but the losses don't go down; they are always above 0.5. Below is my code with MXNet 1.1.0, Python 3.6, and OS X El Capitan 10.11.6. I tried 2 loss functions, squared loss and softmax loss, and both didn't…
Kun Hu • 407 • 5 • 11
3 votes, 1 answer

Perceptron Javascript Inconsistencies

Building a basic Perceptron. My results after training are very inconsistent, even after thousands of epochs. The weights seem to adjust properly; however, the model fails to predict accurately. A second pair of eyes on the structure would be greatly…
3 votes, 4 answers

Unable to learn the XOR representation using a 2-layer multilayer perceptron (MLP)

Using a PyTorch nn.Sequential model, I'm unable to learn all four representations of the XOR Booleans: import numpy as np import torch from torch import nn from torch.autograd import Variable from torch import FloatTensor from torch import…
alvas • 94,813 • 90 • 365 • 641
3 votes, 2 answers

TensorFlow: Implementing a single-layer perceptron / multilayer perceptron using my own data set

I am new to TensorFlow. I looked for examples of implementing a multilayer perceptron using TensorFlow, but I am finding examples only on the MNIST image data set. Apart from MNIST, can I build the neural network model using the same…
3 votes, 2 answers

Neural Networks normalizing output data

I have training data for a NN along with expected outputs. Each input is a 10-dimensional vector and has 1 expected output. I have normalised the training data using Gaussian normalisation, but I don't know how to normalise the outputs since each only has a single…
PRCube • 406 • 1 • 6 • 17
3 votes, 2 answers

NLTK perceptron tagger "TypeError: 'LazySubsequence' object does not support item assignment"

I would like to try and use the PerceptronTagger in the nltk package for Python 3.5, but I am getting the error TypeError: 'LazySubsequence' object does not support item assignment. I would like to train it with data from the Brown corpus with the…
Nathan McCoy • 2,380 • 1 • 18 • 35
3 votes, 2 answers

Neural Network: A Perceptron for guessing the position of a point relative to a function

I am building a simple Perceptron with 3 inputs (x, y, bias=1). It must guess whether the given point (x, y) is above or below a given function. Basically, it was inspired by this article. A supervised model of learning is used to train the network…
Denis Rozimovschii • 398 • 1 • 5 • 17