
In this code, I'm using Python 2.7.13, OpenCV 2.4.13, and PyAutoGUI 0.9.36. The objective is to move the cursor according to facial movement, but the cursor movement is inverted: if my face moves right, the cursor moves left, and if my face moves left, the cursor moves right. I also want the cursor to move right, left, up, and down across the whole screen of my PC, which is x=1920, y=1080.

The purpose of this program is to show that it is possible to offer people with tetraplegia a new way to gain independence and access, so that they can perform simple activities that are part of the routine of millions of individuals, such as turning the lights and the TV on and off.

import cv2
import pyautogui

faceCascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

video_capture = cv2.VideoCapture(0)

while True:
    # Capture frame-by-frame
    ret, frame = video_capture.read()

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = faceCascade.detectMultiScale(
        gray,
        scaleFactor=1.3,
        minNeighbors=5,
        minSize=(80, 80),
        flags=cv2.cv.CV_HAAR_SCALE_IMAGE
    )

    #print 'faces: ', faces

    # Draw a rectangle around the faces
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 0, 255), 3)

        #width, height = pyautogui.size()
        #cursorx, cursory = pyautogui.position()
        #posx = width - cursorx
        #posy = cursory
        # Keep the move inside the loop so it only runs when a face was
        # actually detected; otherwise x, y, w, h may be undefined.
        pyautogui.moveTo(x+w, y+h)

    # Display the resulting frame
    #cv2.imshow('Video', frame)
    rimg = cv2.flip(frame, 1)  # flip code 1 mirrors the frame horizontally
    cv2.imshow("mirrored", rimg)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
# When everything is done, release the capture
video_capture.release()
cv2.destroyAllWindows()
TigerC
  • What happens if you invert the frame right after you get it, instead of flipping into rimg just to show it? Also, if you resize the frame to your screen size, wouldn't that solve the other problem? – ROAR Aug 19 '17 at 06:39
  • There's another thing you can do, although I think you still have to flip the image before detecting the face on it. But you don't need to resize it (I don't think even with resizing you could reach the edges of the screen). You can detect whether the face is in the center area of the frame and, if it's not, move the cursor accordingly (every x milliseconds). If the face is further away from the center area, make the mouse move faster towards that direction; if it's closer, move it slower; if it's center(ish), stop the mouse movement. – ROAR Aug 19 '17 at 06:59
  • First, can you clarify: movement of the head, or movement of facial expressions (eyebrow, eye, etc.)? I would suggest that head movement be excluded from the program to avoid overstimulating the inner ear (the body's gyroscope) with sudden circumduction, adduction, twisting, and subduction. Could you please generate your mouse-tracking data? Just print out your X,Y coordinates while you move the mouse up, down, left, and right, and edit your post with that data. – Danilo Aug 31 '17 at 08:27
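The center-zone idea from the comments can be sketched as a small pure function: map the face center's offset from the frame center to a cursor velocity, with a dead zone in the middle and speed growing with the distance from it. The dead-zone fraction and maximum speed are illustrative guesses, not tuned values.

```python
def center_zone_velocity(face_cx, face_cy, frame_w, frame_h,
                         dead_zone=0.15, max_speed=30):
    """Map a face center (frame coordinates) to a cursor velocity.

    Inside the central dead zone the cursor stays put; outside it,
    speed grows with distance from the frame center up to max_speed.
    """
    # Offsets in [-1, 1], measured from the frame center.
    dx = (face_cx - frame_w / 2.0) / (frame_w / 2.0)
    dy = (face_cy - frame_h / 2.0) / (frame_h / 2.0)

    def axis_speed(offset):
        if abs(offset) < dead_zone:
            return 0  # center-ish: stop
        # Scale the range outside the dead zone onto 0..max_speed.
        scaled = (abs(offset) - dead_zone) / (1.0 - dead_zone)
        return int(round(max_speed * scaled)) * (1 if offset > 0 else -1)

    # Negate dx: in an un-mirrored webcam frame, moving your face to
    # your right moves it left in the image, so the x axis is flipped.
    return -axis_speed(dx), axis_speed(dy)
```

The main loop would then call `pyautogui.moveRel(vx, vy)` with the returned pair on every frame.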

2 Answers


That's a nice thing you are doing.

To fix only the mouse inversion, you can subtract the x,y position from the screen size. But spanning the whole screen with pyautogui.moveTo(x, y) would be very inaccurate and noisy. For smoother motion you can instead use relative moves:

pyautogui.moveRel(None, steps)

That said, if you are using a face cascade in the first place, it would be very difficult to move the face far enough for the corresponding mouse motion. Using face orientation, like tilting left or right, would be better, I'd say.
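The tilt idea can be sketched as a tiny classifier over the two detected eye centers. The pixel threshold is an illustrative value, and the left/right naming is a convention assumption, since the sign depends on whether the frame is mirrored.

```python
def tilt_direction(r_c, l_c, thresh=12):
    """Classify head tilt from the right-eye and left-eye centers.

    Image coordinates, with y growing downward.  Returns 'tilt-right'
    when the right eye sits noticeably lower than the left one in the
    image, 'tilt-left' for the opposite, 'level' otherwise.
    """
    dy = r_c[1] - l_c[1]
    if dy > thresh:
        return 'tilt-right'
    if dy < -thresh:
        return 'tilt-left'
    return 'level'
```

The result could replace the bare `elif(r)` / `elif(l)` branches below with an explicit tilt test when both eyes are visible.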

In the code below, I used eye cascades for left/right motion, so tilting the face a little is enough to move the cursor. I worked on OpenCV 3.2, so make the necessary changes for your version if needed.

CODE

import numpy as np
import cv2
import pyautogui

right = cv2.CascadeClassifier('haarcascade_righteye_2splits.xml')
left = cv2.CascadeClassifier('haarcascade_lefteye_2splits.xml')
smile = cv2.CascadeClassifier('haarcascade_smile.xml')

cam=cv2.VideoCapture(0)

blank=np.zeros((480,848,3),dtype=np.uint8)  # Change this correctly to size of your image frame
fix=0 

print "press y to set reference box for y motion" #set a reference initially for y motion

while(cam.isOpened()):


        ret,img = cam.read()
        r=0
        l=0
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        r_eye= right.detectMultiScale(gray, 1.9, 9)
        l_eye= left.detectMultiScale(gray, 1.9, 9)  #Change these values according to face distance from screen

        for (rx,ry,rw,rh) in r_eye:
                cv2.rectangle(img,(rx,ry),(rx+rw,ry+rh),(255,255,0),2)
                r_c=(rx+rw/2,ry+rh/2)
                r=1

        for (lx,ly,lw,lh) in l_eye:          
                cv2.rectangle(img,(lx,ly),(lx+lw,ly+lh),(0,255,255),2)
                l_c=(lx+lw/2,ly+lh/2)
                l=1

        if(r*l):

            if(l_c[0]-r_c[0]>50):
                cv2.line(img,r_c,l_c,(0,0,255),4)
                mid=((r_c[0]+l_c[0])/2,(r_c[1]+l_c[1])/2)
                cv2.circle(img,mid,2,(85,25,100),2)
                if(fix==1):                        # Change this part of code according to what you want
                                                   # for motion along y direction
                    if( mid[1]<one[1]):
                        pyautogui.moveRel(None, -15)
                    if(mid[1]>two[1]):
                        pyautogui.moveRel(None, 15)

                if(cv2.waitKey(1))== ord('y'):
                        blank=np.zeros_like(img)
                        one=(mid[0]-60,r_c[1]-7)   # Change the values 60, 7 to change box dimensions
                        two=(mid[0]+60,l_c[1]+7)   # Change the values 60, 7 to change box dimensions
                        cv2.rectangle(blank,one,two,(50,95,100),2)
                        fix=1


        elif(r) :   pyautogui.moveRel(-30, None)   # Change the Value and Sign to change speed and direction

        elif (l):   pyautogui.moveRel(30, None)    # Change the Value and Sign to change speed and direction



        img=cv2.bitwise_or(img,blank)
        cv2.imshow('img',img)
        if(cv2.waitKey(1))==27:break

cv2.destroyAllWindows()

In the code, you need to press y to set a reference box for the y motion. While both eyes are detected, moving the face midpoint outside that box drives the cursor up or down.

We could add a smile cascade for mouse clicks, but that is a little inaccurate and slow for now. Better options, like an eye click or something similar, still need to be figured out.
This is very basic code to get things working. Bringing in neural networks for facial expressions might be a lot better, but again, speed is a factor.
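One practical wrinkle with a smile-to-click binding is that a single sustained smile would fire clicks on every frame. A hedged sketch of a debounce helper: the main loop would pass in whether the smile cascade fired and call pyautogui.click() whenever `fire` returns True. The cooldown value is an illustrative guess.

```python
import time


class SmileClicker(object):
    """Turn raw per-frame smile detections into single, spaced clicks."""

    def __init__(self, cooldown=1.5):
        self.cooldown = cooldown          # seconds between clicks
        self.last_click = -cooldown       # allow an immediate first click

    def fire(self, smile_detected, now=None):
        """Return True when the caller should issue one click."""
        now = time.time() if now is None else now
        if smile_detected and now - self.last_click >= self.cooldown:
            self.last_click = now
            return True
        return False
```

In the loop above, that would look like `if clicker.fire(len(smiles) > 0): pyautogui.click()`, where `smiles` comes from the smile cascade's detectMultiScale on the face region.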

I.Newton

If you know the screen size, just subtract what you have now from the screen size to get the cursor on the opposite side. For example:

pyautogui.moveTo(1920 - (x+w), 1080 - (y+h))

If x+w was getting you a screen position of 2 (left edge of the screen), it would now get you a screen position of 1918 (right edge of the screen).
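That same subtraction, combined with scaling from the camera frame up to the screen, can be sketched as one mapping function. The 640×480 frame size here is an assumption; in real code, read it from the capture (`video_capture.get(3)` / `get(4)`).

```python
def frame_to_screen(x, y, w, h, frame_w=640, frame_h=480,
                    screen_w=1920, screen_h=1080):
    """Map the center of a detected face rectangle (camera frame
    coordinates) onto the full screen, mirroring the x axis so the
    cursor follows the face instead of opposing it."""
    cx = x + w / 2.0                       # face center in the frame
    cy = y + h / 2.0
    screen_x = screen_w - cx * screen_w / frame_w   # scale, then mirror x
    screen_y = cy * screen_h / frame_h              # scale y directly
    return int(screen_x), int(screen_y)
```

The question's loop would then call `pyautogui.moveTo(*frame_to_screen(x, y, w, h))` instead of `pyautogui.moveTo(x+w, y+h)`.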