Protect your work from others using a Python script!
Dare to Look at My Work!
Did you ever feel that someone is peeping at your computer screen? Obviously you don't have NASA's secret data on your computer, but you still don't want others seeing your work, your drawings or, most importantly, your research materials. I've written a Python script that can identify its owner and count how many faces are looking at the computer screen. If any unauthorized face is detected, it locks the computer screen automatically, so you don't need to worry a bit! Just concentrate on your work!
Let's hop in, guys:
To recognize faces you would normally need three different scripts:
1. Dataset creator (takes photos and saves them in a folder)
2. Trainer (trains the model with all those saved photos)
3. Recognizer (the final script that recognizes faces based on the training data)
To make it simple and user friendly, I have written a single script with a GUI that does all three of those jobs, with buttons, text boxes, etc.
This is a Python script, so I assume you have Python installed on your computer (the latest version at the time of writing is 3.7.2).
You also need to install the OpenCV library first; you can download it from the official OpenCV website. Then you'll need to pip install pyautogui, pillow and numpy.
Install them via the Python package installer (pip). The time module comes built in with Python.
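The installation steps above boil down to a few pip commands. One note: `cv2.face` (the LBPH recognizer the script uses later) ships in the contrib build of OpenCV, so `opencv-contrib-python` is most likely the package you want; exact package names are an assumption depending on your setup:

```shell
# install the libraries the script imports
pip install opencv-contrib-python    # cv2, including cv2.face (LBPH recognizer)
pip install numpy pillow pyautogui   # arrays, image loading, GUI dialogs + hotkeys
```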
Face Recognition Mandatory Steps:
To start recognizing faces, you first need to take images to create a dataset.
The dataset is the set of images you will use to train your model (the program), so training your model is the second step. To detect faces, OpenCV ships with several classifiers; we'll use the haarcascade_frontalface_alt2.xml file. After downloading OpenCV, go into its folder and then to sources > data > haarcascades, where you'll find several .xml files. You can also use one of these to detect license plates, eyes, etc. The code is (almost) the same.
The trainer is another piece of code that trains the model with the data saved in the dataset folder. After training, it creates a .yml file in the folder. The more data you feed it, the more accurate your model will be, so feel free to take a lot of samples. Not just a lot of photos of one person, but photos of different persons; that way it will be able to distinguish between people.
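The trainer recovers each person's numeric ID from the image filename itself. Since the dataset creator saves every capture as `User.<id>.<sampleNum>.jpg`, splitting the basename on dots gives the ID back. A minimal sketch of that naming convention (the function name here is my own, not from the script):

```python
import os

def parse_dataset_filename(path):
    """Extract (user_id, sample_num) from a dataset image path.

    The dataset creator names each capture 'User.<id>.<sampleNum>.jpg',
    so splitting the basename on '.' recovers both numbers.
    """
    name = os.path.split(path)[-1]   # e.g. 'User.1.17.jpg'
    parts = name.split('.')          # ['User', '1', '17', 'jpg']
    return int(parts[1]), int(parts[2])

print(parse_dataset_filename('dataSet/User.1.17.jpg'))  # (1, 17)
```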
The next piece of code, the one that distinguishes or recognizes a person, is the recognizer.
The recognizer acts on the trained data, so this code is what actually says who is who.
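The recognizer's decision can be factored out into a tiny pure function. LBPH's predict() returns a (label, confidence) pair where a lower confidence value means a better match; the original script only checks `id != 1`, and the threshold of 70 below is an assumed value you would tune for your own data:

```python
def should_lock(predicted_id, confidence, authorized_ids=(1,), conf_threshold=70.0):
    """Return True when the screen should be locked.

    predicted_id / confidence: what rec.predict(roi_gray) returned.
    authorized_ids: the IDs allowed to look at the screen (assumed to be just 1).
    conf_threshold: assumed cutoff; LBPH confidence is a distance, so
    bigger means a worse match.
    """
    if predicted_id not in authorized_ids:
        return True                       # unknown face -> lock
    return confidence > conf_threshold    # authorized id, but a weak match -> lock
```

Adding the confidence check means a stranger whose face happens to get labeled 1 with a poor score still triggers the lock.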
But You Said Single Script Ashraf!
Yes I did, and that's why I have written a single Python script with a pyautogui GUI (GUI stands for Graphical User Interface), so that you can use it in a user-friendly way with some buttons and text boxes.
Download the code from GitHub or copy it from below.
```python
"""ChikonEye literally watches your back,
preventing others who peep at your computer from seeing your valuable secret works.
Now your works are safe and secure.
ChikonEye uses your laptop camera (primary = 0 or secondary = 1, 2 and so on) to see
how many people are looking at the computer screen. If someone unauthorized tries to
see it, it automatically locks the computer screen.

developed by Ashraf Minhaj
mail me at- ashraf_minhaj@yahoo.com
"""

"""Version: 2.0 (all codes in one .py file).
Contains: 1. Dataset Creator (takes photos and creates a dataset)
          2. Trainer (trains your model)
          3. Recognizer (the code that recognizes you)
          4. GUI support (makes this code more user friendly)
"""

"""I'll make an executable .exe file so that this can be run on any computer.
Right now it can be used by people who have python, numpy, opencv and pyautogui
installed on their PC. Don't worry, the exe is coming soon.
"""

import cv2
import numpy as np
import pyautogui
from time import sleep
from PIL import Image   # pip install pillow
import os

# location of the OpenCV haarcascade <change according to your file location>
face_cascade = cv2.CascadeClassifier('F:\\opencv\\sources\\data\\haarcascades\\haarcascade_frontalface_alt2.xml')

cap = cv2.VideoCapture(0)   # 0 = main camera, 1 = extra connected webcam and so on
rec = cv2.face.LBPHFaceRecognizer_create()

# the path where the code is saved
pathz = "C:\\Users\\HP\\cv_practice\\chikon"   # Change this


# recognizer module
def recog():
    """Recognizes people from the pretrained .yml file."""
    rec.read(f"{pathz}\\chikoneye.yml")   # yml file location <change as yours>
    id = 0                                # set id variable to zero
    font = cv2.FONT_HERSHEY_COMPLEX
    col = (255, 0, 0)
    strk = 2

    while True:                                         # this is a forever loop
        ret, frame = cap.read()                         # capture frame by frame
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # change color from BGR to gray
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.5, minNeighbors=5)

        for (x, y, w, h) in faces:
            roi_gray = gray[y: y+h, x: x+w]   # region of interest is the face

            # *** drawing a rectangle ***
            color = (255, 0, 0)
            stroke = 2
            end_cord_x = x + w
            end_cord_y = y + h
            cv2.rectangle(frame, (x, y), (end_cord_x, end_cord_y), color, stroke)

            # *** detect ***
            id, conf = rec.predict(roi_gray)
            # cv2.putText(np.array(roi_gray), str(id), font, 1, col, strk)
            print(id)   # prints the ids

            # if it sees an unauthorized person
            if id != 1:
                # execute the lock command
                pyautogui.hotkey('win', 'r')   # win + r key combo
                pyautogui.typewrite("cmd\n")   # type cmd and 'Enter' = '\n'
                sleep(0.500)                   # a bit of delay <needed!>
                # type the Windows lock command into the prompt and hit 'Enter'
                pyautogui.typewrite("rundll32.exe user32.dll, LockWorkStation\n")
            # elif id == 1: authorized person (me & my brother Siam) -> do nothing

        cv2.imshow('ChikonEye', frame)

        # check if the user wants to quit the program (pressing 'q')
        if cv2.waitKey(10) == ord('q'):
            op = pyautogui.confirm("Close the Program 'ChikonEye'?")
            if op == 'OK':
                print("Out")
                break

    cap.release()
    cv2.destroyAllWindows()   # remove all the windows we have created


# create a dataset and train the model
def data_Train():
    sampleNum = 0
    id = pyautogui.prompt(text="Enter User ID.\n\nnote: numeric data only 1 2 3 etc.",
                          title='ChikonEye', default='none')

    # check the user input: it must be a positive number
    if id is None or not id.isdigit() or int(id) < 1:
        pyautogui.alert(text='WRONG INPUT', title='ChikonEye', button='Back')
    else:
        # the input is okay
        while True:
            ret, img = cap.read()
            gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            faces = face_cascade.detectMultiScale(gray, 1.3, 5)

            for (x, y, w, h) in faces:      # find faces
                sampleNum = sampleNum + 1   # increment sample num up to 21
                cv2.imwrite(f'{pathz}\\dataSet\\User.{id}.{sampleNum}.jpg',
                            gray[y: y+h, x: x+w])
                cv2.rectangle(img, (x, y), (x+w, y+h), (255, 0, 0), 4)
                cv2.waitKey(100)

            cv2.imshow('faces', img)   # show the image while capturing
            cv2.waitKey(1)
            if sampleNum > 20:         # 21 samples collected
                break

        trainer()   # train the model based on the new images
        recog()     # start recognizing


# trainer
def trainer():
    faces = []   # empty list for faces
    ids = []     # empty list for IDs
    path = f'{pathz}\\dataSet'

    # gets each image along with its ID
    def getImageWithID(path):
        imagePaths = [os.path.join(path, f) for f in os.listdir(path)]
        for imagePath in imagePaths:
            faceImg = Image.open(imagePath).convert('L')   # 'L' = grayscale
            faceNp = np.array(faceImg, 'uint8')
            ID = int(os.path.split(imagePath)[-1].split('.')[1])
            faces.append(faceNp)
            ids.append(ID)
            cv2.waitKey(10)
        return ids, faces

    ids, faces = getImageWithID(path)
    rec.train(faces, np.array(ids))

    # create a yml file in the folder. It will be created automatically.
    rec.save(f'{pathz}\\chikoneye.yml')
    pyautogui.alert("Done Saving.\nPress OK to continue")
    cv2.destroyAllWindows()


# options checking
opt = pyautogui.confirm(text='Choose an option', title='ChikonEye',
                        buttons=['START', 'Train', 'Exit'])
if opt == 'START':
    recog()
if opt == 'Train':
    opt = pyautogui.confirm(text="Please look at the Webcam.\n"
                                 "Turn your head a little while capturing.\n"
                                 "Please add just one face at a time.\n"
                                 "Click 'Ready' when you're ready.",
                            title='ChikonEye', buttons=['Ready', 'Cancel'])
    if opt == 'Ready':
        data_Train()
    if opt == 'Cancel':
        print("Cancelled")
        recog()
if opt == 'Exit':
    print("Quit the app")
```
Create a dataSet folder inside the folder where you saved the code (the best option is to download the zip file from GitHub). The .yml file will be saved automatically by the program.
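If you'd rather not create the folder by hand, a couple of lines of Python can do it for you; this sketch assumes you run it from the directory where the script lives:

```python
import os

# the script expects a 'dataSet' folder next to it for the captured images
dataset_dir = os.path.join(os.getcwd(), "dataSet")
os.makedirs(dataset_dir, exist_ok=True)   # no error if the folder already exists
print(dataset_dir)
```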
Run this in PowerShell or a command prompt.
When you start it for the first time it will not recognize anyone, so please make sure to train the model first by pressing 'Train'. The rest is pretty easy.
|| Obviously I don't have rocket-science secret data on my computer, but at least this can be used for fun with friends, and it will save your privacy sometimes too. ||
Thank you.