Azure Bootcamp and Raspberry Jam 2019, Boca Raton

We are excited to welcome you to this unique community event! Join us and learn to build a face recognition IoT device using a Raspberry Pi Zero W, a camera, and the Microsoft Azure platform!

Hardware and Installation

To take part in the Raspberry Jam event you will need the following items (we have pre-purchased and configured 10 Raspberry Pis, available on a first-come, first-served basis, but you can bring your own):


Follow these instructions if you are purchasing your own Raspberry Pi kit:

  • Install the Raspbian operating system on the SD card
    Follow these instructions on how to do just that.
  • Mount the camera on the Raspberry Pi
    Use the short camera cable, and do not forget to insert the SD card
    If you do not use a monitor or keyboard to run your Pi, see these instructions for a headless installation
    If the camera has a protective film on the lens, remove it before mounting the camera
    Make sure the cables are inserted properly and are snug
    NOTE: mount the camera slowly, and do not press hard or the ribbon cable may come loose

Initial Configuration

Install PuTTY on your laptop if you do not already have it installed. For instructions on installing PuTTY for Windows, see this page.

Connect to the Raspberry Pi using SSH
If you used a headless installation, your Pi should already be connected to your network; if not, you will need to use an HDMI monitor and a keyboard to connect your Pi to the network, and obtain the IP Address of your Pi.

If you are bringing your own Pi to the event, you will need to configure the Pi for the lab:
  • Start the Pi and login interactively (you will need a monitor and a keyboard)
  • Click on the Preferences menu, and select Raspberry Pi Configuration
  • On the Interfaces tab, select Enabled next to Camera
  • On the Interfaces tab, select Enabled next to SSH
  • On the Interfaces tab, select Enabled next to VNC
  • Click OK, and then Yes when prompted to reboot
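If you prefer the command line, the same interfaces can be enabled non-interactively through raspi-config's nonint mode. The sketch below builds the commands; verify the function names against the raspi-config version on your Raspbian image before relying on them.

```python
def raspi_config_cmds():
    # non-interactive equivalents of the Preferences > Interfaces steps;
    # in raspi-config's convention, 0 means "enabled"
    return [
        ["sudo", "raspi-config", "nonint", "do_camera", "0"],
        ["sudo", "raspi-config", "nonint", "do_ssh", "0"],
        ["sudo", "raspi-config", "nonint", "do_vnc", "0"],
    ]

# On the Pi you would run each command, e.g.:
#   import subprocess
#   for cmd in raspi_config_cmds():
#       subprocess.run(cmd, check=True)
```

A reboot is still required after enabling the camera, just as in the graphical configuration tool.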


If you are using a Pi provided by the event, it is already configured to join the WiFi provided by the venue, and the default password was not changed. To SSH to the Pi, connect your laptop to the same WiFi network.

Ask a proctor to assist you with network connectivity


  • Start the SSH utility (PuTTY)
  • Enter the IP Address of your Pi
  • Click Open
  • Use the default pi credentials if you haven't changed them

Install the VNC Client
This lab uses the VNC Client to access the Raspberry Pi user interface from your laptop.
You can download it here.

Lab Overview

This lab demonstrates how to call the Azure Cognitive Services from a Raspberry Pi Zero W and the Python programming language. In addition, the lab uses a shared Azure Service Bus to turn on a light that the event organizers will provide and configure.

Lab Part 1: Signup for Necessary Services

First, you need to sign up for the Microsoft Azure service.

More detailed instructions will be provided separately.

If you received a code from a proctor to try Azure for free, visit aka.ms/joinedu to redeem your code.

  • Add an Azure Cognitive service
    You will need to note the Cognitive service URI and the service Key assigned to you

  • OPTIONAL: Add a Service Bus service, and create a Queue called 'testqueue'
    This step is not required since the lab uses a shared Azure Bus Queue, defined in Enzo Online; however if you want to run this lab after the event, you will need to create the Azure Service Bus in your Azure account, open a free Enzo Online account, and register the Azure Bus Queue in Enzo.

    The configuration values for the call to the Azure Bus will be provided by a proctor.
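Once you have noted your Cognitive Services key and endpoint, you can assemble the pieces of an analyze call with a short Python sketch. The region host below is only an example; use the endpoint assigned to your own service.

```python
def analyze_request(host, key):
    """Build the URL, headers, and query parameters for the Computer Vision
    'analyze' operation. host and key are the values assigned to you."""
    url = "https://" + host + "/vision/v1.0/analyze"
    headers = {
        "Ocp-Apim-Subscription-Key": key,
        "Content-Type": "application/octet-stream",
    }
    params = {"visualFeatures": "Categories,Description", "language": "en"}
    return url, headers, params

# Example call (network request shown as a comment so this file runs anywhere):
#   import requests
#   url, headers, params = analyze_request(
#       "southcentralus.api.cognitive.microsoft.com", "YOUR_KEY")
#   with open("test.jpg", "rb") as f:
#       print(requests.post(url, params=params, headers=headers, data=f.read()).json())
```

A 401 response from this call means the key is wrong; a 404 usually means the host does not match the region where you created the service.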

Lab Part 2: Test the Camera

By default all the necessary software should be installed and ready to be used. The objective of this step is to confirm that the camera hardware has been properly installed. Make sure you place the Pi in a vertical position, with the Raspberry Pi logo facing down, before taking a picture so that the image looks right side up.

  • Using the VNC client, start a command prompt (click on the Terminal icon)
  • Type the following command:
    raspistill -v -o test.jpg
  • Look at the camera from a couple of feet away; raspistill waits about 5 seconds before taking the picture
  • Inspect the test.jpg file to see your picture (found under /home/pi/)


If you get an error, your camera is probably not installed correctly. This test must succeed before you proceed.
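The raspistill call above can also be scripted from Python. The sketch below simply builds the same command line; the -t flag sets the delay before capture in milliseconds, matching the roughly 5-second wait you observed.

```python
def raspistill_cmd(output_path, timeout_ms=5000):
    # -v: verbose output, -t: milliseconds to wait before capturing,
    # -o: where to write the JPEG
    return ["raspistill", "-v", "-t", str(timeout_ms), "-o", output_path]

# On the Pi you would run:
#   import subprocess
#   subprocess.run(raspistill_cmd("/home/pi/test.jpg"), check=True)
```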


Lab Part 3: Call Azure Cognitive Services using Python

Now that you are able to connect to your Raspberry Pi, let's start coding. We will be using the VNC Viewer for this purpose.

If you would rather code over SSH, simply use PuTTY to start an SSH session


Create an Enzo directory
Let's start by creating an enzo directory under the /home/pi/ directory. The name is case-sensitive. This directory will hold the source code and the last image captured by the camera. If you obtained your Pi through an event organizer, the directory is most likely already there.

Create a lab.py file and run it
  • Navigate to the enzo folder and create a new, empty lab.py file.
  • Double-click on the file to open the Geany editor.
  • Type, or copy and paste the Python code below into the file.
  • Replace the subscription_key value with your Cognitive Service key.
  • Verify that the uri_base variable points to the correct endpoint for your Cognitive Service.
  • Run the program, then place your hand within a couple of feet of the camera.
  • A message saying "A hand was detected!" should appear on your screen.

                                    
#
#   LAB - CAPTURE IMAGE AND DETECT A HAND WITH AZURE COGNITIVE SERVICES
#
#   This lab shows you how to capture an image from the camera added to the
#   Raspberry Pi and analyze it with the Azure Cognitive Services.
#
#
#   You must set this variable before running the lab:
#       subscription_key
#

import http.client, urllib.request, urllib.parse, urllib.error, base64, json, uuid
from picamera import PiCamera
from time import sleep
import time 
import requests
import sys 

#cognitive settings
subscription_key = "ENTER_THE_AZURE_COGNITIVE_KEY"
uri_base = 'southcentralus.api.cognitive.microsoft.com'
analyze_uri = "/vision/v1.0/analyze?%s"

#other settings
fileName = "/home/pi/enzo/image.jpg"
headers = dict()
headers['Ocp-Apim-Subscription-Key'] = subscription_key
headers['Content-Type'] = "application/octet-stream"

lastValue = False 
camera = PiCamera()

def capturePicture(pathToFileInDisk):
    camera.capture(pathToFileInDisk)

def isDetected(pathToFileInDisk, headers, keyword):
    params= urllib.parse.urlencode({
        "visualFeatures": "Categories,Description",
        "language": "en",
        })
    
    with open( pathToFileInDisk, "rb" ) as f:
        inputdata = f.read()
    body = inputdata
    iswarning = False
        
    try:
        conn = http.client.HTTPSConnection(uri_base)
        conn.request("POST", analyze_uri % params, body, headers)
        response = conn.getresponse()
        data = response.read().decode('utf-8')                 
        #print(data)
        parsed = json.loads(data)
        tags = (parsed['description']['tags'])
        #print ("response:") 

        for c in tags:
            #print(c)
            if c == keyword:
                iswarning = True
                break 

        if iswarning:
            print(keyword + ' DETECTED') 

        conn.close() 

    except Exception as e:
        print('Error:')
        print(e)

    return iswarning

# Main code starts here
print('starting...')

while True:
    try:
        print("calling camera...")
        capturePicture(fileName)
        print("calling Azure Cognitive service...")
        iswarning = isDetected(fileName, headers, 'hand')
        if lastValue != iswarning:
            lastValue = iswarning
            if iswarning:
                print("A hand was detected!")
    except:
        print("Error:", sys.exc_info()[0]) 

    sleep(2) 
                                    

Lab Part 4: Send a Message to an Azure Bus Queue using Enzo

In this section, you will be sending a message to an Azure Bus Queue by making a call to an Enzo HTTPS endpoint. During the lab, the message will be sent to a shared Enzo account, so that messages from all the devices at the event are sent to the same queue. Another application is listening on the Azure Service Bus queue and a light will turn on as soon as a message is received.

Using Enzo to send a message makes it simpler to program with Azure since no SDK is required; it is also more secure since you are not sharing the actual service keys.




  • Declare Variables for Enzo

    First, let's add a few variables on top of our code.
    Note: Two extra variables are added below to show you how to call an Azure Bus Topic instead of an Azure Bus Queue.

    
    #enzo settings
    enzoguid="ENTER_YOUR_ENZO_AUTHTOKEN"  # Ask a proctor for this information
    busconfig = 'ENTER_YOUR_ENZO_CONFIG_NAME'  # Ask a proctor for this information
    
    enzourl="https://daas001.enzounified.com" #saving cloud data through enzo
    enzourl_bus_queue= enzourl + '/bsc/azurebus/sendmessagetoqueue' #send a message to an azure bus Queue
    enzourl_bus_topic= enzourl + '/bsc/azurebus/sendmessagetotopic' #send a message to an azure bus Topic
    topicName = 'events'
    queueName = 'testqueue'
    


  • Add a method that sends an HTTP request to Enzo

    Let's create a method for the call to Enzo. It builds the HTTP headers Enzo requires: an authentication GUID, an Enzo configuration name (the configuration settings of the Azure Bus), the message to send, the name of the queue, and an optional label.

    Then the code performs an HTTP Post to an Enzo URI endpoint, with the name of the Azure Bus method to call. To send a message to a queue we are calling the /bsc/azurebus/sendmessagetoqueue Enzo method. To learn more about the Azure Bus methods available, and how to call Enzo using HTTP, see this reference documentation.

    
    #METHOD THAT SENDS A MESSAGE TO AN AZURE BUS QUEUE
    def sendToBusQueue(message):
        print('** AZURE BUS CALL **')
        iotheaders={'authToken':enzoguid,
                    '_config': busconfig,
                    'body': message,
                    'name': queueName,
                    'label': 'SecurityApp'}
        try:
            response=requests.post(enzourl_bus_queue, headers=iotheaders)
            print(response)
        except:
            print("Error Bus:", sys.exc_info()[0]) 
    


  • Make the call from the main routine

    Last but not least, let's call Enzo if we detect a hand. To do that, add a call to the method we just created in the appropriate place.

    
            ...
            if iswarning:
                print("A hand was detected!")
                sendToBusQueue('HAND')      # Add this line
    

                                    
#
#   LAB - CAPTURE IMAGE, SEND TO AZURE BUS IF HAND DETECTED
#
#   This lab provides you the basic foundation for sending messages to Enzo, and shows
#   you how to obtain an image from a camera added to the Raspberry Pi. 
#
#
#   You must set these variables before running the lab:
#       subscription_key
#       enzoguid
#       busconfig
#

import http.client, urllib.request, urllib.parse, urllib.error, base64, json, uuid
from picamera import PiCamera
from time import sleep
import time 
import requests
import sys 

#cognitive settings
subscription_key = "ENTER_THE_AZURE_COGNITIVE_KEY"
uri_base = 'southcentralus.api.cognitive.microsoft.com'
analyze_uri = "/vision/v1.0/analyze?%s"

#enzo settings
enzoguid="ENTER_YOUR_ENZO_AUTHTOKEN"   #Ask a proctor for this information
busconfig = 'ENTER_YOUR_ENZO_CONFIG_NAME'   #Ask a proctor for this information

enzourl="https://daas001.enzounified.com" #saving cloud data through enzo
enzourl_bus_queue= enzourl + '/bsc/azurebus/sendmessagetoqueue' #send a message to an azure bus Queue
enzourl_bus_topic= enzourl + '/bsc/azurebus/sendmessagetotopic' #send a message to an azure bus Topic
topicName = 'events'
queueName = 'testqueue'

#other settings
fileName = "/home/pi/enzo/image.jpg"
headers = dict()
headers['Ocp-Apim-Subscription-Key'] = subscription_key
headers['Content-Type'] = "application/octet-stream"

lastValue = False 
camera = PiCamera()

def capturePicture(pathToFileInDisk):
    camera.capture(pathToFileInDisk)

def isDetected(pathToFileInDisk, headers, keyword):
    params= urllib.parse.urlencode({
        "visualFeatures": "Categories,Description",
        "language": "en",
        })
    
    with open( pathToFileInDisk, "rb" ) as f:
        inputdata = f.read()
    body = inputdata
    iswarning = False
        
    try:
        conn = http.client.HTTPSConnection(uri_base)
        conn.request("POST", analyze_uri % params, body, headers)
        response = conn.getresponse()
        data = response.read().decode('utf-8')                 
        #print(data)
        parsed = json.loads(data)
        tags = (parsed['description']['tags'])
        #print ("response:") 

        for c in tags:
            #print(c)
            if c == keyword:
                iswarning = True
                break 

        if iswarning:
            print(keyword + ' DETECTED') 

        conn.close() 

    except Exception as e:
        print('Error:')
        print(e)

    return iswarning

#METHOD THAT SENDS A MESSAGE TO AN AZURE BUS QUEUE
def sendToBusQueue(message):
    print('** AZURE BUS CALL **')
    iotheaders={'authToken':enzoguid,
                '_config': busconfig,
                'body': message,
                'name': queueName,
                'label': 'SecurityApp'}
    try:
        response=requests.post(enzourl_bus_queue, headers=iotheaders)
        print(response)
    except:
        print("Error Bus:", sys.exc_info()[0]) 

# Main code starts here
print('starting...')

while True:
    try:
        print("calling camera...")
        capturePicture(fileName)
        print("calling Azure Cognitive service...")
        iswarning = isDetected(fileName, headers, 'hand')
        if lastValue != iswarning:
            lastValue = iswarning
            if iswarning:
                print("A hand was detected!")
                sendToBusQueue('HAND')
    except:
        print("Error:", sys.exc_info()[0]) 

    sleep(2) 
                                    

Lab Part 5: Add Face Recognition using Cognitive Services

In this section of the lab we will add face recognition to your Raspberry Pi Zero. When your face has been recognized by Cognitive Services, the code will send another message to the Azure Bus Queue.

  • Take a clear picture of yourself
    To do this, use the raspistill utility from Part 2 of this lab. Just make sure your face is clearly visible and fills up the image as much as possible. The code will access this file directly (/home/pi/test.jpg).

  • Add logic to process the image
    The code will be modified to process images taken by the Raspberry Pi; if a face is detected, the image will be sent to the Face API and compared to yours. If a match is found, the code will send another message to the queue.


    • Declare Variables for the Face API

      First, let's add a few more variables on top of our code.

      face_detect_uri = "/face/v1.0/detect?returnFaceId=true"
      face_verify_uri = "/face/v1.0/verify"
      base_face_id = ""
      base_face_file = '/home/pi/test.jpg'  # this file was created from the camera test
      


    • Add a method that calls the Face Detection method

      The Face Detection method, provided by the Microsoft Face API, returns a Face Identifier that remains valid for up to 24 hours. In other words, once an image has been processed by the API, you can refer to the detected face by its ID instead of re-uploading the image. Let's create a Python function that returns the Face ID of an image.

      
      #METHOD THAT RETURNS A FACEID FROM AN IMAGE STORED ON DISK
      def getFaceId(pathToFileInDisk, headers):
          
          with open( pathToFileInDisk, "rb" ) as f:
              inputdata = f.read()
          body = inputdata
          faceId = ""
              
          try:
              conn = http.client.HTTPSConnection(uri_base)
              conn.request("POST", face_detect_uri, body, headers)
              response = conn.getresponse()
              data = response.read().decode('utf-8')                 
              #print(data)
              parsed = json.loads(data)
              faceId = parsed[0]['faceId']
              #print ("response:") 
      
              conn.close() 
      
          except Exception as e:
              print('Error:')
              print(e)
      
          return faceId 
      


    • Add a method that calls the Face Verification method

      The Face Verify operation of the Face API compares two images that contain faces, and indicates if the faces are similar. The confidence level indicates how similar the two faces are. The call to this API takes two Face IDs to compare.

      
      #METHOD THAT RETURNS TRUE/FALSE IF TWO FACES ARE LIKELY TO BE THE SAME PERSON
      def compareFaces(faceId1, faceId2, headers):
          
          body = json.dumps({ "faceId1": faceId1, "faceId2": faceId2 })
          isSame = False

          # the Verify operation expects a JSON body, not an octet-stream
          jsonHeaders = dict(headers)
          jsonHeaders['Content-Type'] = 'application/json'

          try:
              conn = http.client.HTTPSConnection(uri_base)
              conn.request("POST", face_verify_uri, body, jsonHeaders)
              response = conn.getresponse()
              data = response.read().decode('utf-8')                 
              #print(data)
              parsed = json.loads(data)
              isSame = parsed['isIdentical']
              #print ("response:") 
      
              conn.close() 
      
          except Exception as e:
              print('Error:')
              print(e)
      
          return isSame 
      


    • Modify the main routine to call the Face operations

      We now need to call the Face Detection method when the main routine starts, to obtain the Face ID of your face, and modify the loop to call the face detection / verification methods when a face is in front of the camera.

      
      ...
      # Get the face id of the base image - this faceId is valid for 24 hours
      faceId1 = getFaceId(base_face_file, headers)
      
      while True:
          try:
              print("calling camera...")
              capturePicture(fileName)
              print("calling Azure Cognitive service...")
              isdetected = isDetected(fileName, headers, "hand")
              if isdetected:
                  print('WARNING: A hand was detected')   # sending an SMS is left as an exercise
                  sendToBusQueue('HAND')
              else:
                  isdetected = isDetected(fileName, headers, "face")
                  if isdetected:
                      faceId2 = getFaceId(fileName, headers)
                      isSame = compareFaces(faceId1, faceId2, headers)
                      if isSame:
                          # Same face detected... send the message
                          sendToBusQueue('FACE')
              
          except:
              print("Error:", sys.exc_info()[0]) 
      
          sleep(2) 
      

                                        
    #
    #   LAB - CAPTURE IMAGE, SEND TO AZURE BUS IF HAND OR KNOWN FACE DETECTED
    #
    #   This lab provides you the basic foundation for sending messages to Enzo, and shows
    #   you how to obtain an image from a camera added to the Raspberry Pi. 
    #
    #
    #   You must set these variables before running the lab:
    #       subscription_key
    #       enzoguid
    #
    
    import http.client, urllib.request, urllib.parse, urllib.error, base64, json, uuid
    from picamera import PiCamera
    from time import sleep
    import time 
    import requests
    import sys 
    
    #cognitive settings
    subscription_key = "ENTER_THE_AZURE_COGNITIVE_KEY"
    uri_base = 'southcentralus.api.cognitive.microsoft.com'
    analyze_uri = "/vision/v1.0/analyze?%s"
    
    face_detect_uri = "/face/v1.0/detect?returnFaceId=true"
    face_verify_uri = "/face/v1.0/verify"
    base_face_id = ""
    base_face_file = '/home/pi/test.jpg'  # this file was created from the camera test
    
    #other settings
    fileName = "/home/pi/enzo/image.jpg"  # this file is created every few seconds
    headers = dict()
    headers['Ocp-Apim-Subscription-Key'] = subscription_key
    headers['Content-Type'] = "application/octet-stream"
    
    #enzo settings
    enzoguid="ENTER_YOUR_ENZO_AUTHTOKEN"
    
    enzourl="https://daas001.enzounified.com" #saving cloud data through enzo
    enzourl_bus_queue= enzourl + '/bsc/azurebus/sendmessagetoqueue' #send a message to an azure bus Queue
    enzourl_bus_topic= enzourl + '/bsc/azurebus/sendmessagetotopic' #send a message to an azure bus Topic
    busconfig = 'bscmessages'
    topicName = 'events'
    queueName = 'testqueue'
    
    lastValue = False 
    camera = PiCamera()
    
    def capturePicture(pathToFileInDisk):
        camera.capture(pathToFileInDisk)
    
    def isDetected(pathToFileInDisk, headers, keyword):
        params= urllib.parse.urlencode({
            "visualFeatures": "Categories,Description",
            "language": "en",
            })
        
        with open( pathToFileInDisk, "rb" ) as f:
            inputdata = f.read()
        body = inputdata
        iswarning = False
            
        try:
            conn = http.client.HTTPSConnection(uri_base)
            conn.request("POST", analyze_uri % params, body, headers)
            response = conn.getresponse()
            data = response.read().decode('utf-8')                 
            #print(data)
            parsed = json.loads(data)
            tags = (parsed['description']['tags'])
            #print ("response:") 
    
            for c in tags:
                #print(c)
                if c == keyword:
                    iswarning = True
                    break 
    
            if iswarning:
                print(keyword + ' DETECTED') 
    
            conn.close() 
    
        except Exception as e:
            print('Error:')
            print(e)
    
        return iswarning 
    
    #METHOD THAT RETURNS A FACEID FROM AN IMAGE STORED ON DISK
    def getFaceId(pathToFileInDisk, headers):
        
        with open( pathToFileInDisk, "rb" ) as f:
            inputdata = f.read()
        body = inputdata
        faceId = ""
            
        try:
            conn = http.client.HTTPSConnection(uri_base)
            conn.request("POST", face_detect_uri, body, headers)
            response = conn.getresponse()
            data = response.read().decode('utf-8')                 
            #print(data)
            parsed = json.loads(data)
            faceId = parsed[0]['faceId']
            #print ("response:") 
    
            conn.close() 
    
        except Exception as e:
            print('Error:')
            print(e)
    
        return faceId 
    
    #METHOD THAT RETURNS TRUE/FALSE IF TWO FACES ARE LIKELY TO BE THE SAME PERSON
    def compareFaces(faceId1, faceId2, headers):
        
        body = json.dumps({ "faceId1": faceId1, "faceId2": faceId2 })
        isSame = False

        # the Verify operation expects a JSON body, not an octet-stream
        jsonHeaders = dict(headers)
        jsonHeaders['Content-Type'] = 'application/json'

        try:
            conn = http.client.HTTPSConnection(uri_base)
            conn.request("POST", face_verify_uri, body, jsonHeaders)
            response = conn.getresponse()
            data = response.read().decode('utf-8')                 
            #print(data)
            parsed = json.loads(data)
            isSame = parsed['isIdentical']
            #print ("response:") 
    
            conn.close() 
    
        except Exception as e:
            print('Error:')
            print(e)
    
        return isSame 
    
    
    #METHOD THAT SENDS A MESSAGE TO AN AZURE BUS QUEUE
    def sendToBusQueue(message):
        print('** AZURE BUS CALL **')
        iotheaders={'authToken':enzoguid,
                    '_config': busconfig,
                    'body': message,
                    'name': queueName,
                    'label': 'SecurityApp'}
        try:
            response=requests.post(enzourl_bus_queue, headers=iotheaders)
            print(response)
        except:
            print("Error Bus:", sys.exc_info()[0]) 
    
    # Main code starts here
    print('starting...')
    
    # Get the face id of the base image - this faceId is valid for 24 hours
    faceId1 = getFaceId(base_face_file, headers)
    
    while True:
        try:
            print("calling camera...")
            capturePicture(fileName)
            print("calling Azure Cognitive service...")
            isdetected = isDetected(fileName, headers, "hand")
            if isdetected:
                print('WARNING: A hand was detected')   # sending an SMS is left as an exercise
                sendToBusQueue('HAND')
            else:
                isdetected = isDetected(fileName, headers, "face")
                if isdetected:
                    faceId2 = getFaceId(fileName, headers)
                    isSame = compareFaces(faceId1, faceId2, headers)
                    if isSame:
                        # Same face detected... send the message
                        sendToBusQueue('FACE')
            
        except:
            print("Error:", sys.exc_info()[0]) 
    
        sleep(2) 
     
                                        
    

Lab Part 6: Send a Tweet when your face is recognized

This is the last section of the lab, and it is open for you to implement on your own.

In this section of the lab, we will modify what happens when your face has been recognized by the Azure Cognitive service. In addition to sending a message to a queue, you will also send a Tweet using Enzo.

Follow these instructions to register a Twitter application and add a configuration in Enzo.


See additional instructions and sample code to send a tweet with Python using Enzo.


At a high level, you will need to perform the following steps:

  • Sign up for the Enzo service (if not already done)
  • Register a Twitter application for your Enzo account
  • Create a Config Setting in Enzo for your Twitter application
  • Modify the Python code to send a tweet through Enzo
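As a starting point, the Python side can mirror the sendToBusQueue pattern from Part 4. The sketch below is an illustration only: the '/bsc/twitter/sendtweet' method name and the 'message' header are assumptions, not the documented Enzo API, so check the Enzo reference documentation (or ask a proctor) for the actual Twitter method and its parameters.

```python
import sys

ENZO_URL = "https://daas001.enzounified.com"

def tweetHeaders(authToken, config, message):
    # same header pattern used by sendToBusQueue in Part 4;
    # the 'message' parameter name is a placeholder assumption
    return {'authToken': authToken,
            '_config': config,
            'message': message}

def sendTweet(authToken, config, message):
    import requests  # deferred so the module loads even without requests installed
    try:
        # '/bsc/twitter/sendtweet' is a hypothetical endpoint name - verify it
        response = requests.post(ENZO_URL + '/bsc/twitter/sendtweet',
                                 headers=tweetHeaders(authToken, config, message))
        print(response)
    except Exception:
        print("Error Tweet:", sys.exc_info()[0])
```

In the main loop you would call sendTweet right after sendToBusQueue('FACE'), passing your Enzo auth token and the Twitter configuration name you created.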