Sli.do for Teaching Assistant interactions

Students often hesitate to ask questions in a public setting, fearing how they will be perceived by their peers. As a Teaching Assistant, this can be frustrating: you are motivated to share a wealth of knowledge, yet you receive no feedback from the class about which topics are hard to grasp.

To bridge this gap and encourage students to ask questions through a medium they are comfortable with, mobile/web applications that increase audience interaction can prove fruitful. While researching options, I chanced upon an app called Sli.do. Though it is usually used at conferences, Sli.do was very helpful in getting my students to ask the questions they would otherwise not raise. By monitoring my feed in real time, I could answer the most upvoted questions and archive the ones I had answered.

Here is the small how-to post that I sent to my class on using it.

EE447: Sli.do – HOW TO

  1. Download the Sli.do app on your mobile phone/tablet, or visit https://www.sli.do/ in a browser of your choice.

If using the app on a mobile/tablet device:

img1

If on browser:

img2

  2. An event code for the class will be provided to you.
  3. Once you have joined the event, the following screen should appear.

If using the app on a mobile/tablet device:

img3

If on browser:

img4
  4. Click the Ask button at the bottom right and type in your question. The screenshots that follow are from the mobile app, but everything works similarly in the browser.

img5

  5. Once the question is sent, the rest of the class will be able to view it and “upvote” it (similar to Stack Exchange/Reddit).

img6

Please make sure you upvote the questions that matter to your understanding of what follows in class. This will work only with enough participation from all of you.

  6. Another option available in the app is Polls. We may not use this feature, but it is good to know about. Only the person who creates the event can create a poll and view the results/infographics.

img7

Hope this helps you create an enhanced learning experience for your students.

Google AutoDraw – The perfect tool for creative minds who can’t draw

Google released a web app approximately a month ago that brings to life the inner genius in even the most cack-handed artists. Introducing (albeit a little late) AutoDraw!

AutoDraw is an experiment by Google in the artificial intelligence domain: the system uses machine learning to match your slapdash squiggles to well-defined images. The suggested visuals are created by talented artists, and the dataset is constantly growing to include new classes of objects.

Google gathered the data to train a neural network using Quick, Draw!, in which the community provides many variations of a drawing for a particular object. With it, Google trained a robust system that combines the classic field of art with machine learning.

The app is easy to use. As the user draws on a blank screen, AutoDraw pairs the drawing with potential matches contributed by talented artists. The user simply chooses the right suggestion from the toolbar and, voila! The crudely drawn creation is replaced with a rather slick version of it.

Google allows artists to submit their best drawings to expand the dataset, all while teaching the machine to improve its performance. AutoDraw is a creative tool that allows amateur users to create posters or colouring books.

Here is a video of my fun experience with AutoDraw.

Color tracking using webcam, OpenCV and Python

In this post, a simple method to obtain a continuous video feed from a standard webcam is described, along with code to track a particular color in the camera feed. For this purpose, we use OpenCV functions in Python.

To test if you are able to capture the live video feed, use the code snippet provided below.

# Import the necessary packages
import cv2

# Declare a named window and open the default camera
cv2.namedWindow("preview")
vc = cv2.VideoCapture(0)

# Try to get the first frame
if vc.isOpened():
    rval, frame = vc.read()
else:
    rval = False

while rval:
    cv2.imshow("preview", frame)
    rval, frame = vc.read()
    key = cv2.waitKey(20)
    if key == 27:  # Exit when ESC key is pressed
        break

vc.release()  # To unlock the camera on Windows OS
cv2.destroyWindow("preview")

If you are unable to close the window, simply press ‘Esc’. It has been assigned as the exit key in the code.

Now, to verify color tracking using HSV values, use the following code:

# Import the necessary packages
import cv2
import numpy as np

# Open the default camera
vc = cv2.VideoCapture(0)

while True:
    # Take each frame
    ret, frame = vc.read()
    if ret:
        # Convert BGR to HSV
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        # Define the range of green in HSV
        # (for blue, use [110, 50, 50] and [130, 255, 255] instead)
        lower_green = np.array([50, 100, 100], dtype=np.uint8)
        upper_green = np.array([70, 255, 255], dtype=np.uint8)
        # Threshold the HSV image to keep only green pixels
        mask = cv2.inRange(hsv, lower_green, upper_green)
        # Bitwise-AND the mask and the original image
        res = cv2.bitwise_and(frame, frame, mask=mask)
        cv2.imshow('frame', frame)
        cv2.imshow('mask', mask)
        cv2.imshow('res', res)
    k = cv2.waitKey(5) & 0xFF
    if k == 27:  # Exit when ESC key is pressed
        break

vc.release()
cv2.destroyAllWindows()

If you would like to determine HSV values for any color, please use the example below:

>>> green = np.uint8([[[0, 255, 0]]])
>>> hsv_green = cv2.cvtColor(green, cv2.COLOR_BGR2HSV)
>>> print(hsv_green)

The value obtained (here, [[[60 255 255]]]) gives the base hue H for setting the range of colors to track. Use [H-10, 100, 100] as the lower bound and [H+10, 255, 255] as the upper bound.
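The H-10/H+10 rule above can be wrapped in a small helper. The sketch below computes an OpenCV-style hue (0–179) with Python's standard colorsys module so it runs even without OpenCV installed; the function name hsv_bounds and the clamping of the hue margin are my own additions.

```python
import colorsys

def hsv_bounds(b, g, r, h_margin=10):
    # colorsys works in RGB with values in [0, 1];
    # OpenCV hue spans 0-179 (i.e. degrees divided by 2)
    h, _, _ = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    h_cv = int(round(h * 180)) % 180
    # Apply the [H-10, 100, 100] / [H+10, 255, 255] rule, clamped to range
    lower = [max(h_cv - h_margin, 0), 100, 100]
    upper = [min(h_cv + h_margin, 179), 255, 255]
    return lower, upper

print(hsv_bounds(0, 255, 0))  # green → ([50, 100, 100], [70, 255, 255])
```

For pure green this reproduces the [50, 100, 100] and [70, 255, 255] bounds used in the tracking code above; pass the returned lists to np.array(..., dtype=np.uint8) before handing them to cv2.inRange.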

Hope this helps.