#MeToo – IT WAS NOT YOUR FAULT!

The #MeToo campaign started on Twitter Sunday afternoon. It seeped into Facebook status messages and Instagram posts. If you are confused about what it is, let me explain… But know this: if someone shared a status message with #MeToo, do not ask them to share their personal story. Butt out. You are not entitled to their story. It is none of your business. If someone did not share the message on their wall, it does not necessarily mean they have not been through sexual abuse. They would just like their space. Respect that.

How it started –

Over the last few days, Harvey Weinstein, a Hollywood producer, has been making headlines for sexual harassment and sexual assault allegations. Here is a report documenting some cases of those who have been victims of Weinstein’s predatory behavior. He has over 30 different sexual abuse allegations to his name. However, sexual assault/sexual harassment is not restricted to people in any one profession. On Sunday, actress Alyssa Milano took to Twitter and posted the following:

Why it started –

For those of you who do not understand the point of this, take a few minutes to give this issue your undivided attention.

I have been a victim of sexual assault and sexual harassment.

To some, it might sound like a daily affair and that victims should “move on”. To some, it might sound like a sensitive topic that they should not talk further about in public. To some, it might sound like a bunch of words strung together and nothing more. But to most, it sends chills down their spine because they have been there too.

Let me tell you something about sexual harassment/assault. Your profession does not matter. The length of your skirt does not matter. Your skin color does not matter. Your nationality or the language you speak does not matter. Your social status does not matter. Your gender or sexual orientation does not matter. Hell, it does not matter if you are 6 or 65 years old.  It does not matter if you say “Please stop” or “No!”, irrespective of whether it is said feebly or firmly. But what does seem to work sometimes is saying “I have a boyfriend”. Why? Because some men respect only the thoughts of other men. They feel entitled to sexual favors from the rest of us. The need for power and control is what drives most violators.  

Most women are sexually harassed by men at least once in their lifetime, and if that does not scare you, it should. There are, of course, varying levels of assault and harassment.

It should not all be equated to one problem.

For those of you who have been following #MeToo on Twitter/Facebook/Instagram, you would have read stories of women, men and non-binary folks who have been victims of sexual abuse. Some, multiple times. People who do not understand the gravity of these reports take sexual harassment lightly. Sexual harassment and sexual assault range from unwelcome advances to inappropriate conduct (with or without physical contact) to rape. None of these should be trivialized.

Here is the story of my best friend, one such girl, born and raised in what one would term a “low-risk” family. As a seven-year-old, she trusted the neighbor (of over seven years) from her village. Silly seven-year-olds trust most people. The result of that blind trust was him sliding his hands into her trousers. She told her grandmother about the man being a predator, that smart seven-year-old. Sadly, it was brushed off as a child not knowing the world well enough. That was just the beginning. Men followed and made unwarranted sexual advances as the now 12-year-old girl made her way to her evening classes. She spoke up, only to see her voice drowned out by the nonchalant attitude of evening commuters. She watched her best friend being groped in front of her, and sexually frustrated men make advances on different occasions. She screamed for help, but help never came. All of this and more before she was 15. “Ensure you do not provoke them”, they later said. She did not dress provocatively. She was not drunk. She was not walking outside in the wee hours of the night. She was merely a child. It does not matter who she is, or where she is from. What matters is her story, and I really hope it bothers you.

I understand the annoying question that has been running in the back of your mind – Why go public about personal stories? Well, no problem is solved if everyone decides to shut up. No awareness is raised if nobody speaks up. No help is sought if people are not sure who will be blamed by the end of a conversation – the victim or the person whose fault it truly was.  

The reason victims take so long to tell their story is because of the shame and guilt that society tends to impose on them, rather than the perpetrator. Know this, it is NEVER the victim’s fault. It is incredibly hard to come to terms with that. People who do not understand the issue for what it truly is propagate victim-blaming and victim-shaming. Don’t be that person.

Those of you who know me know that I am not one to watch silently when someone makes unwelcome advances. But I do not expect everyone to do the same. People are different like that. Join forces in acknowledging that. Understand this could be happening to anyone around you, even if they are not voicing it or reacting the way you would. Learn to break free from the shackles of being a bystander. If everyone acted in a timely manner, more lives could be shielded from the horrors of sexual abuse every year.

Let me end this post by stating very clearly that raising awareness is only a part of the issue that we, as individuals, can solve. Sexual misconduct is a widespread epidemic in our world, one we can’t get rid of until those responsible for it are punished for their misdeeds. Remember, the disease starts with the perpetrator, not the victim. 

So be aware of the problem. Acknowledge it. Understand it. Always make sure you act to protect the victim.


My first Tech Trivia Night!

On 25 July, 2017 I attended my first ever Trivia Night at the Facebook Seattle office. What made the event more interesting to attend was the theme — “Women in Tech”.

For the longest time, I was apprehensive about attending a Trivia Night because I did not know if I would contribute much. Since the theme was something I was fairly comfortable with, I decided to be a part of some evening madness. After the event, I realized how much I had been missing out on! Read on to understand why…

Facebook Seattle hosted its first Women in Tech trivia event on its stunning sundeck on a sunny Seattle evening. Check out the view of Downtown Seattle from the sundeck in the featured picture! (Lesson: You get fabulous views at tech trivia nights during summer)

[Image: (L) Jennifer Margolis and (R) April Wixom]

 

Facebook’s very own April Wixom and Jennifer Margolis were on the mic, adding some fun and flair to the engaging trivia questions. The environment struck the right balance, letting us unwind and meet like-minded women who enjoyed tech history as much as I did. There was serious competition in the air, as each team was competing to win Facebook growlers, shot glasses and other Facebook swag. The teams were all equally capable, and it was incredibly exciting to see who did best at the end of each round. The team effort put into answering each question was my favorite part of the evening. (Lesson: It’s okay to not know all the answers!)

The Trivia night consisted of five parts — (1) True or False, (2) Women in Tech and History, (3) All things Computer Science, (4) Movie Quotes, and (5) Picture round based on Facebook Open Source Projects.

Each round consisted of ten questions. The first round was not something we were particularly proud of, but it was interesting nevertheless! The questions included “The color orange was named after the fruit” and “Can turtles breathe through their butts?”. We got 6 out of 10. But I think I learned the most during this round. (Lesson: The quirky facts you take back home will lead to a lot of time spent learning new facts!) Don’t you love it when that happens?

What followed is something my team and I are incredibly proud of. We got all of our Women in Tech and History and Computer Science questions right! Not that it validates what we do, but it shows how much we have been inspired by reading about the field. If there is one thing I believe in (disregarding all the fake news articles we are presented with these days), it is this:

You are what you have read.

The questions were about prominent figures such as Grace Hopper, Anita Borg and Margaret Hamilton, and other tech such as Enigma, ENIAC, etc. Jennifer thoroughly enjoyed hosting the round on movie quotes, voicing each one the way the character does: from Casablanca to The Wizard of Oz, from A Streetcar Named Desire to Titanic. We did surprisingly well in that round as well! (Lesson: You will surprise yourself at Trivia Nights!) The final picture round was gripping. We got the questions right by simply reading code, though we were not fully aware of all the (awesome!) Open Source projects that Facebook has going. Though we did not entirely make up for the points we lost in the first round, we ended up in fourth place and experienced a fun evening geeking out. (Lesson: Life is so much easier when you have well-commented source code) Here is a list of all the open source projects Facebook has made available to the public.

It was a memorable event with friends (old and new), good food and a stunning view of Downtown and Lake Union. There were times when I did not know the answers, and times when I forgot them, but my time was not wasted. Don’t deprive yourself of all the learning and fun you can have at trivia nights the way I did for the last couple of years.

Could not make it to a Tech Trivia? Keep an eye on my Twitter feed for a similar event. The organizers at Facebook are also determined to put on something more challenging in the near future! Or just head out and learn new things about anything under the sun!

Sli.do for Teaching Assistant interactions

Oftentimes, students do not ask questions in a public setting, fearing how they will be perceived by their peers. As a Teaching Assistant, this can prove hard when you are motivated to share a wealth of knowledge but receive no feedback from the class on which topics are hard to grasp.

To bridge this gap and to encourage students to ask questions through the medium most comfortable to them, mobile/web applications that increase audience interaction can prove fruitful. While doing my research, I chanced upon an app called Sli.do. Though it is usually used at conferences, Sli.do was very helpful in getting my students to ask the questions they would otherwise not raise. By monitoring my feed in real time, I could answer the most “upvoted” questions and archive the ones I had answered.

Here is a small How-to post that I sent across to my class on how to use it.

EE447: Sli.do – HOW TO

  1. Download the sli.do app on your mobile phone/tablet, or enter https://www.sli.do/ in a browser of your choice.

If using the app on a mobile/tablet device:

[Screenshot]

If on a browser:

[Screenshot]

  2. An event code will be provided to you for the class.
  3. Once you are logged into your event, the following screen should appear.

If using the app on a mobile/tablet device:

[Screenshot]

If on a browser:

[Screenshot]

  4. Click the Ask button at the bottom right and type your question in. The screenshots that follow are from the mobile application, but everything works similarly in the browser.

[Screenshot]

  5. Once the question is sent, the rest of the class will be able to view it and “upvote” it (similar to Stack Exchange/Reddit).

[Screenshot]

Please upvote any question whose answer is important for understanding what follows in class. This will only work with enough participation from all of you.

  6. Another option available on the app is Polls. We may not use this feature, but it is certainly good to know about. Only the person who creates the event can create a poll and view the results/infographics.

[Screenshot]

Hope this helps you create an enhanced learning experience for your students.

 

Google AutoDraw – The perfect tool for creative minds who can’t draw

About a month ago, Google released a web app that brings out the inner genius in even the most cack-handed artists. Introducing (albeit a little late) AutoDraw!

AutoDraw is an experiment by Google in the artificial intelligence domain: the system uses machine learning to match well-defined images to your slapdash squiggles. The suggested visuals are drawn by talented artists, and the dataset is constantly growing to include new classes.

Google gathered the data to train a neural network using Quick, Draw! By having the community provide different variations of a drawing for a particular object, Google trained a robust system that combines the classic field of art with machine learning.

The app is easy to use. As the user draws on a blank screen, AutoDraw pairs the drawing with potential matches from talented artists. The user simply chooses the right suggestion from the toolbar and, voila! The user’s crudely drawn creation is replaced with a rather slick version of it.

Google allows artists to submit their best drawings to expand the dataset, all while teaching the machine to improve its performance. AutoDraw is a creative tool that allows amateur users to create posters or colouring books.

Here is a video of my fun experience with AutoDraw.

 

Color tracking using webcam, OpenCV and Python

In this post, a simple method to obtain continuous video from a standard webcam is described, along with code to track a certain color in the camera feed. For this purpose, we make use of OpenCV functions in Python.

To test if you are able to capture the live video feed, use the code snippet provided below.

# import the necessary packages
import cv2

# Declare variables for the preview window
cv2.namedWindow("preview")
vc = cv2.VideoCapture(0)

# try to get the first frame
if vc.isOpened():
    rval, frame = vc.read()
else:
    rval = False

while rval:
    cv2.imshow("preview", frame)
    rval, frame = vc.read()
    key = cv2.waitKey(20)
    if key == 27:  # Exit when ESC key is pressed
        break

vc.release()  # To unlock the camera on Windows OS
cv2.destroyWindow("preview")

If you are unable to close the window, simply press ‘Esc’. It has been assigned as the exit key in the code.

Now, to verify color tracking using HSV values, use the following code:

# Import necessary packages
import cv2
import numpy as np

vc = cv2.VideoCapture(0)

while True:
    # Take each frame
    ret, frame = vc.read()
    if ret:
        # Convert BGR to HSV
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

        # Define the range of the color to track in HSV
        lower_blue = np.array([50, 100, 100], dtype=np.uint8)   # alternative: [110, 50, 50]
        upper_blue = np.array([70, 255, 255], dtype=np.uint8)   # alternative: [130, 255, 255]

        # Threshold the HSV image to keep only the chosen color
        mask = cv2.inRange(hsv, lower_blue, upper_blue)

        # Bitwise-AND the mask and the original image
        res = cv2.bitwise_and(frame, frame, mask=mask)

        cv2.imshow('frame', frame)
        cv2.imshow('mask', mask)
        cv2.imshow('res', res)

    k = cv2.waitKey(5) & 0xFF
    if k == 27:  # Exit when ESC key is pressed
        break

vc.release()
cv2.destroyAllWindows()

If you would like to determine HSV values for any color, please use the example below:

>>> green = np.uint8([[[0, 255, 0]]])
>>> hsv_green = cv2.cvtColor(green, cv2.COLOR_BGR2HSV)
>>> print(hsv_green)
[[[ 60 255 255]]]

The hue value obtained (here, 60) forms the base for setting the range of colors to track: use [H-10, 100, 100] as the lower bound and [H+10, 255, 255] as the upper bound.
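The recipe above (convert a reference color to HSV, then widen the hue by ±10) can be folded into a small helper. This is a minimal sketch: the function name `hsv_bounds` is my own, and it uses Python’s standard-library `colorsys` module rather than OpenCV, scaling the hue to OpenCV’s 0–179 range so the result matches what `cv2.cvtColor` would give for the hue channel.

```python
import colorsys

def hsv_bounds(bgr, h_margin=10):
    """Return (lower, upper) HSV bounds on the OpenCV scale for a BGR color.

    OpenCV stores hue as degrees/2 (0-179); colorsys gives hue in [0, 1),
    so multiplying by 180 matches the two scales.
    """
    b, g, r = bgr
    h, _, _ = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    h_cv = int(round(h * 180))  # hue on the OpenCV scale
    lower = [max(h_cv - h_margin, 0), 100, 100]
    upper = [min(h_cv + h_margin, 179), 255, 255]
    return lower, upper

# Pure green in BGR reproduces the [50..70] hue window used above
print(hsv_bounds((0, 255, 0)))   # ([50, 100, 100], [70, 255, 255])
```

Convert the returned lists with `np.array(..., dtype=np.uint8)` before passing them to `cv2.inRange`.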

Hope this helps.

#MarchForScience – Tweets and Treats

“What do we want?”    

“Evidence-based Science!”

“When do we want it?”

“After Peer review!”

On April 22, 2017, over 500 March for Science events around the world rallied for science and evidence-based policy-making.

At the different March for Science rallies, demonstrators gathered to hear a mix of scientists, politicians, and celebrities laud science as the force moving humanity forward and demand evidence-based policy from our leaders and governments. Keynote speakers included Megan Smith, Bill Nye the Science Guy, Adam Savage, Mayim Bialik and Prof. Manu Prakash, among others. They acknowledged the vital role science plays in our lives and the need to respect and encourage research that gives us insight into the world.

There were folks dressed in lab coats and pink knit brain hats. There were costumed characters and festooned pets. Across the nation and abroad, as thousands of scientists and their supporters convened on Earth Day to defend science against proposed government cuts and political interference, many got their messages across with colorful and candid protest signs. I could not personally make it to the march (who else hates falling sick when something fantastic is going on?!). However, I followed the marches across the world on Twitter.

Here are a few signs/tweets I fell in love with:

Though this was not a tweet from Tyson on the day of Science March, it stresses the importance of science and research unlike anything else.

Scientists and citations. This one got it all right in one tweet. I had a stupid smile on my face while reading this. NIH is life. Literally.

The electrical engineer in me squealed a little when I saw the following sign. This was then inducted into my list of favorite tweets (Get it, get it… ;))

Boy, did they get this right! A woman’s place is in the lab. Agree? (:

Yes. Yes. YES.

I am not one for mixing science and politics. But when proposed budget cuts affect funding for research where it is needed most, politicians force scientists’ hands. Also, who doesn’t love Katie Mack!?

When we have support from one pole to another, you know we did this right!

Here are some cute brain hats for you. Pretty neat, don’t you think?

This was only the beginning. The following week (April 23-29, 2017) will be a “Week of Action”. Please read more about it on the March for Science blog. You could also join a Satellite near you here.

For those of you interested in meeting other scientists, the World Science Festival, founded by Prof. Brian Greene, will reconvene for the 10th annual World Science Festival in New York City from May 30, 2017 to June 4, 2017. Watch out for tickets here.

To wrap up, this was the message I had planned for my sign:

May the facts be with you.

#GPUniversity: Deep Learning and Beyond

NVIDIA hosted GPUniversity, a day of talks and a hands-on workshop on Deep Learning, in the Husky Union Building (HUB) at the University of Washington, Seattle on 14 April, 2017. The workshop was organized to discuss the future of Artificial Intelligence computing and discover how Graphics Processing Units (GPUs) are powering this revolution.

The day had a solid lineup of speakers (Stan Birchfield, NVIDIA, and Prof. Ali Farhadi, UW-Seattle) and a workshop on signal processing using NVIDIA DIGITS.

The talks started at 10:30 am, with Dr. Stan Birchfield presenting ‘Deep Learning for Autonomous Drone Flying through Forest Trails‘. He is a Principal Research Scientist at NVIDIA, Seattle. Dr. Birchfield gave us a brief overview of three major projects happening at NVIDIA. The first described how NVIDIA is currently looking at replacing the Image Signal Processor (ISP), a collection of modules such as auto exposure, denoise and demosaic, with a deep learning network. Here is a blog post from NVIDIA that provides some information on the advances in deep learning.

The second project was about their efforts to reduce driver distraction. Using data from inside the car, the head pose and gaze of the driver are estimated. A different research team at NVIDIA is also researching the use of hand gestures for automotive interfaces. Having worked on gesture recognition using a standard camera and computer vision algorithms, I find this research exciting. Their most recent paper appeared at CVPR 2016.

He finally addressed the topic of image-to-image translation before speaking about his own research. Image-to-image translation would allow one to shift images from a day view to night, from sunny to rainy, or from RGB to IR. The possibilities are endless. The system takes a raw image as input and provides a translated image as output. Here is a publication by NVIDIA I found on the topic.

This was followed by Dr. Birchfield’s research on the autonomous flight of drones in forests. Most drone enthusiasts have found it hard to navigate autonomous aerial vehicles in the forest: the trees create a multipath effect and attenuate or block the signal, making GPS unreliable. If this problem could be solved, drones could serve multiple functions – search and rescue, environmental mapping, personal videography and, of course, drone racing!

NVIDIA’s approach to the problem eliminates the use of GPS (at this stage) and uses deep learning for computer vision instead. Their research is done using micro aerial vehicles (MAVs); for this purpose, they use a 3DR Iris+ with an NVIDIA Jetson TX1. Through imitation learning (also used in NVIDIA’s self-driving cars), the drone is taught to fly along a trail and stop at a safe distance if a human is detected. The dataset builds on prior research from the University of Zurich (Giusti et al. 2016) and data collected from Pacific Northwest trails. The system also makes use of the DSO and YOLO algorithms. A distribution mismatch was fixed by using three cameras instead of just one. A detailed talk about this research will be presented at the GPU Technology Conference in May. You can follow the research here.

Professor Ali Farhadi had an interactive session on Visual Intelligence. He started his presentation by showcasing the performance of YOLO in real-time.

[Image: YOLO in real-time using a mobile phone]

An additional demo showed the design of a $5 computer that detects people, built using a Raspberry Pi Zero.

Prof. Farhadi took us through a number of projects in his 45-minute talk. The man never fails to impress (I have been in his class and he is an inspiring teacher!) I am going to provide a brief description of these projects and add links to publications/research websites below.

Visual recognition involves visual knowledge, data, parsing and visual reasoning. The action-centric view of visual recognition involves three parts: recognizing actions, predicting expected outcomes and devising a plan. The projects discussed include all these factors.

  1. imsitu.org : It is used for situation recognition, as opposed to treating all the components of an image as objects. This enables the system to not just predict the objects or locations, but include information on the activity being performed and the roles of the participants performing the activity. The demo provided on the website implements Compositional Conditional Random field, pre-trained using semantic data augmentation on 5 million web images.
    Go ahead and try it here.
  2. Learn EVerything about ANything (LEVAN): Single camera systems pose a problem when size is a determining factor for visual intelligence. However, if we are able to understand the average sizes of objects, we could make better predictions by imposing a distribution. LEVAN acts as a visual encyclopedia for you, helping you explore and understand in detail any topic that you are curious about.
    Try the demo here. If it does not have a concept you are looking for, click and add it to the database! 🙂
  3. Visual Knowledge Extraction Engine (VisKE): To briefly describe it, VisKE does visual fact checking. It provides the most probable explanation based on visual images from the internet. It generates a factor graph that assigns scores based on how much it visually trusts the information.
    Try the demo here.
  4. Visual Newtonian Dynamics (VIND): VIND predicts the dynamics of query objects in static images. The dataset compiled includes videos aligned with Newtonian scenarios represented using game engines, and still images with their ground truth dynamics. A Newtonian neural network performs the correlation.
  5. What Happens if?: By making use of the Forces in Scenes (ForScene) dataset from the University of Washington, and using a combination of Recurrent Neural Nets with Convolutional Neural Nets, this project aims to understand the effect of external forces on objects. The system makes sequential predictions based on the force vector applied to a specific location.
  6. AI2 THOR Framework: THOR is the dataset of visually realistic scenes rendered for studying actions based on visual input.

Hope these projects shed more light on the possibilities in Computer Vision and Deep Learning.

[Image: GPUniversity workshop about the Deep Learning Institute]

If you would like to get your hands dirty, try nvlabs.qwiklab.com for access to NVIDIA DIGITS or courses mentioned on the Deep Learning Institute website.

Depression: Why it is important to talk

April 7, 2017. World Health Day. This year the World Health Organization (WHO) is leading a campaign to raise awareness about depression (Depression: Let’s talk), currently the number one cause of disability. Let’s talk about that.

According to a recently updated fact sheet by the WHO, over 300 million people are affected by depression worldwide. The numbers are only increasing. With high-stress jobs and increasing social pressure to perform, anxiety and depression affect people of all ages, from all walks of life, in all countries. So if you are one among the 300 million, know this…

You are not alone.

Depression is common. It is a medical condition affecting the brain, much like a tumor or Parkinson’s disease. There is nothing wrong with you.

Similar to tumors or any other illness, depression can be treated. It is one of the best documented but least discussed health problems. The statistics behind suicides due to depression are staggering. However, a closer look shows that a number of these could have been prevented if the signs had been detected early in a person’s life. A majority of the people experiencing mental health problems do not receive any form of care.

[Image: You are stronger than you know]

One of the key problems in our society is the huge stigma around talking about depression and mental health. Raised to believe that feeling vulnerable is a weakness and a sign of personal inadequacy, most people find it hard to discuss their emotions. The stigmatization of depression does nothing to help those grappling with it — in fact, treating depression as a personal problem rather than an illness can deter depressed people from seeking professional help and cause them to feel guilty instead.

We need to break the stigma around depression.

Research has evidenced the benefits of voicing thoughts and feelings as a step towards recovery. Psychotherapy, commonly referred to as talk therapy, is designed to relieve despondence of patients by providing a mental toolkit that challenges negative thoughts. This kind of therapy helps us learn about ourselves in such a deep and broad way that we can utilize our understanding in a variety of situations.

For those of you trying to help a dear one, take a minute to look at this article. It is important to educate yourself about the don’ts of such a conversation. It is crucial not to dismiss or belittle someone’s condition while trying to help them. Sometimes it is okay to just listen.

For those of you battling (Yes, that’s the word I chose to use… cuz you’re braver than most!) depression, I understand that at a time when the word ‘depression’ is used loosely, it is hard to gauge whether people fully understand what you are going through. It may feel impossible to explain the helpless feelings inside you to others, or even to gather the strength to confide in someone. But choosing to talk to someone you trust could help you understand that you are not in this alone. You don’t have to fight this fight alone.

There are many paths to recovery and each person’s may be different. Whether you attend self-help groups, speak to a clinician, seek medication, or simply speak to loved ones, it’s important to share your feelings. Explaining your condition and symptoms will help those around you, including yourself, understand what it is you’re going through.

I am not a therapist. I will probably not have the best solutions to your problems. But if you need someone to listen to you, know that I can be that person. An ally in this crazy, beautiful world… Where you belong.

International Women’s Day celebration with Women Techmakers: What I learnt

International Women’s Day is observed on March 8th every year. It is a global call to celebrate the social, economic, cultural and political achievements of women. It is also a time to reflect on the progress we have made, and to encourage ordinary women to do extraordinary things for their countries and communities.

This year, Women Techmakers is hosting summits at Google offices across the globe to celebrate International Women’s Day. The summits last for an entire day, with talks, discussions and hands-on activities by various partners, such as Tensorflow, Speechless and more! These, unlike the Women Techmakers meetups, are invitation-only. The attendees are chosen through an online application available on their website. It requires a short essay on “What are you passionate about solving for in your current role?“. I crossed my fingers and poured my heart into the essay I sent across. I was fortunate to be extended an invitation to attend the summit at Google Kirkland on March 4th.

On the day of:

The surprisingly sunny Seattle weather offered the most stunning view of the Kirkland office. Here is a picture of what I saw.

[Image: Google Kirkland office on a warm, sunny day]

The atmosphere in the welcome room was incredibly warm and friendly. The theme of the summit was “Telling your story”. To get started on this, we were given name tags to which we could attach three qualities we best associate with. This served as an easy conversation starter, which was a blessing for an introvert such as myself. After exchanging greetings with women from companies big and small, and downing a sumptuous breakfast, we headed into a room full of ~150 talented women.

[Image: Name tags and some “swag”]

The day started with Olga Garcia, Engineering Program Manager at Google, giving us something to think about. She paraphrased a line from the poem “Our Grandmothers” by Maya Angelou:

I come as one, but I stand as ten thousand.

It makes us ponder about the journey of those who have paved the way for us, and instills the confidence in us to do the same for others.

Senator Patty Kuderer welcomed us with her story of fighting to bring gender parity to the state of Washington. She is also an advocate for introducing more girls to STEM, something she had to forgo during her own school days. This was followed by a keynote address by Thais Melo, a Tech Lead Manager in Google Cloud. Her journey from coding in the corner to leading a team is evidence that we can take on any role as long as we believe in our abilities.

The Stories of Success panel that followed the Keynote opened us to stories of four brilliant women: Sara Adineh, Nikisha Reyes-Grange, Angel Tian and Heather Sherman. We were introduced to wide-ranging experiences from their respective fields and were given an insight into how they dealt with challenges they faced.

After lunch, we split into two groups to attend fun workshops; Group 1 headed to “An Introduction to TensorFlow” and Group 2 was part of “Develop Your Story: An Interactive Workshop”. I chose the latter, excited to learn from Kimberly MacLean. It was two hours of learning to voice our stories through group discussions, role play, and fun games. This included “Yes, and…”, the Portkey exercise (YES! The workshop was led by someone who gets Harry Potter!), story building, and working on our elevator pitch. Time flew by and I made some very good friends through the exercises. I also learned from others that the TensorFlow workshop was just as much fun; why can’t we be in two places at the same time!

With all our newfound knowledge and energy, we moved into the last session of the day – an evening with Waymo! Waymo is an autonomous driving company spun out of Alphabet Inc. in December 2016. As a Robotics Engineer with experience in Computer Vision and Machine Learning, I found this session the perfect blend to geek out! The interactive activity required us to think from the perspective of a software engineer, a systems engineer, a project manager and a mechanical engineer. We went through the design process for different scenarios, such as snow, rain and crowded neighborhoods.

IMG_3969
Women Techmakers Seattle: Success is…

To learn more about what happened at all other summits, follow #WTM17 on Twitter or Google Plus.