
My friend and I were just discussing the best way to spam my page. He is writing a Java program that switches the proxy and sets a new user agent with every request to my site. That made us wonder whether it is somehow possible to prevent a website from being accessed by non-real visitors. Is this possible? How?

Florian Müller
  • You might want to include some kind of completely automated public Turing test to tell computers and humans apart. – CodeCaster Nov 21 '11 at 11:45
  • Have a look at these similar questions: http://stackoverflow.com/questions/1577918/blocking-comment-spam-without-using-captcha http://stackoverflow.com/questions/3615595/help-dealing-with-spam-logic http://stackoverflow.com/questions/3223937/preventing-spam – Qwerky Nov 21 '11 at 11:52
  • 1
    A method is to log ips within a table in your db, that dont download specific things, like the css/js files or a 1x1px image with random name,then every now and then add persistent offenders to a .htaccess redirecting them to google... a bot will not download these it will only fetch the html – Lawrence Cherone Nov 21 '11 at 13:16
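A minimal sketch of that asset-tracking idea, assuming a Java servlet container (the question already involves Java). The filter name, the random image path /x7f3k1.gif and the in-memory sets are made up for illustration; the comment itself suggests a database table plus .htaccess rules instead:

    import java.io.IOException;
    import java.util.Map;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletRequest;

    /** Flags IPs that request HTML pages but never load CSS/JS or the tracking image. */
    public class AssetTrackingFilter implements Filter {

        /** IPs that fetched a browser-only asset (CSS, JS, or the random-named image). */
        private static final Set<String> FETCHED_ASSETS = ConcurrentHashMap.newKeySet();

        /** HTML-only hit counts per IP; persistent offenders would go into .htaccess. */
        private static final Map<String, Integer> HTML_ONLY_HITS = new ConcurrentHashMap<>();

        @Override
        public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequest req = (HttpServletRequest) request;
            String ip = req.getRemoteAddr();
            String uri = req.getRequestURI();

            if (uri.endsWith(".css") || uri.endsWith(".js") || uri.endsWith("/x7f3k1.gif")) {
                // A full browser parses the page and loads the assets it references.
                FETCHED_ASSETS.add(ip);
            } else if (uri.equals("/") || uri.endsWith(".html")) {
                // Count page views from IPs that have not loaded any assets so far.
                if (!FETCHED_ASSETS.contains(ip)) {
                    HTML_ONLY_HITS.merge(ip, 1, Integer::sum);
                }
            }
            chain.doFilter(request, response);
        }

        @Override
        public void init(FilterConfig filterConfig) { }

        @Override
        public void destroy() { }
    }

Since the HTML arrives before its assets, a single HTML-only hit means nothing; only IPs that keep piling up counts without ever loading an asset are worth blocking, which matches the comment's "persistent offenders" wording.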

2 Answers


is it somehow possible to prevent a website from being accessed by non-real visitors

No. Servers interface with other machines, not with "real visitors", so they can never know for certain. They only ever see machines; the humans sit behind them.

You can try to make your application smarter at guessing whether a request was originally triggered by a human or not, but strictly speaking this is not possible in a fully automated fashion; there will always be defects and gaps in the detection.
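As an illustration of that kind of guessing, here is a minimal sketch of one heuristic in plain Java; the user-agent check, the 20-requests-per-10-seconds threshold and the class name RequestRateHeuristic are illustrative choices, not something taken from the answer:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class RequestRateHeuristic {

        // Arbitrary illustration values: more than 20 requests in 10 seconds looks automated.
        private static final int MAX_REQUESTS_PER_WINDOW = 20;
        private static final long WINDOW_MILLIS = 10_000;

        private static final class Counter {
            long windowStart;
            int count;
        }

        private final Map<String, Counter> counters = new ConcurrentHashMap<>();

        /** Returns true if the request *looks* automated; it can never be certain. */
        public boolean looksAutomated(String ip, String userAgent) {
            // Missing or obviously scripted user agents are a weak signal; Java's
            // HttpURLConnection sends a default user agent starting with "Java/".
            if (userAgent == null || userAgent.isEmpty() || userAgent.startsWith("Java/")) {
                return true;
            }
            long now = System.currentTimeMillis();
            Counter c = counters.computeIfAbsent(ip, k -> new Counter());
            synchronized (c) {
                if (now - c.windowStart > WINDOW_MILLIS) {
                    // Start a new counting window for this IP.
                    c.windowStart = now;
                    c.count = 0;
                }
                c.count++;
                return c.count > MAX_REQUESTS_PER_WINDOW;
            }
        }
    }

Note that the bot from the question, which rotates its proxy and user agent on every call, slips straight past both checks; that is exactly the kind of gap this answer is talking about.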

hakre

If it's simply about filling in a comment form or something similar, you might want to check out one of these (a small sketch of the CAPTCHA idea follows the list):

  • Akismet service
  • Captcha
  • Disqus integration
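A minimal sketch of the CAPTCHA idea from the list above, assuming a Java servlet session is available; the arithmetic question, the session key and the class name ArithmeticChallenge are made up for illustration, and the listed services are of course far harder to script around than this:

    import java.security.SecureRandom;

    import javax.servlet.http.HttpSession;

    public class ArithmeticChallenge {

        private static final SecureRandom RANDOM = new SecureRandom();
        private static final String SESSION_KEY = "captcha.answer";

        /** Generates a question like "What is 3 + 7?" and remembers the answer in the session. */
        public static String newQuestion(HttpSession session) {
            int a = RANDOM.nextInt(10);
            int b = RANDOM.nextInt(10);
            session.setAttribute(SESSION_KEY, a + b);
            return "What is " + a + " + " + b + "?";
        }

        /** Checks the visitor's answer against the one stored in the session. */
        public static boolean isHuman(HttpSession session, String answer) {
            Object expected = session.getAttribute(SESSION_KEY);
            session.removeAttribute(SESSION_KEY); // one attempt per question
            if (answer == null || expected == null) {
                return false;
            }
            try {
                return Integer.parseInt(answer.trim()) == (Integer) expected;
            } catch (NumberFormatException e) {
                return false;
            }
        }
    }

On the form page you would render newQuestion(session) next to an extra input field, and in the submit handler you would reject the comment unless isHuman(session, answer) returns true.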
Kris van der Mast
  • You can also try a DNSBL - it can block 90% of the proxies with almost no false positives if the blacklists are carefully selected (see the sketch below). – Narf Nov 21 '11 at 11:59
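A minimal sketch of such a DNSBL lookup in Java, assuming IPv4 addresses and using zen.spamhaus.org purely as an example zone; as the comment says, choose the blacklists carefully, and check each list's usage terms before querying it from production:

    import java.net.InetAddress;
    import java.net.UnknownHostException;

    public class DnsblCheck {

        /** Returns true if the IPv4 address is listed in the given DNSBL zone. */
        public static boolean isListed(String ipv4, String zone) {
            String[] octets = ipv4.split("\\.");
            if (octets.length != 4) {
                return false; // not a plain IPv4 address; skip the check
            }
            // DNSBLs are queried with the octets reversed: 1.2.3.4 -> 4.3.2.1.zone
            String query = octets[3] + "." + octets[2] + "." + octets[1] + "." + octets[0] + "." + zone;
            try {
                InetAddress.getByName(query); // resolves only if the IP is listed
                return true;
            } catch (UnknownHostException notListed) {
                return false;
            }
        }

        public static void main(String[] args) {
            // 127.0.0.2 is the conventional "always listed" test address for most DNSBLs.
            System.out.println(isListed("127.0.0.2", "zen.spamhaus.org"));
        }
    }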