#MarchForScience – Tweets and Treats

“What do we want?”    

“Evidence-based Science!”

“When do we want it?”

“After Peer review!”

On April 22, 2017, more than 500 Marches for Science took place around the world, rallying for science and evidence-based policy-making.

At the various March for Science rallies, demonstrators gathered to hear a mix of scientists, politicians, and celebrities laud science as the force moving humanity forward and demand evidence-based policy from our leaders and government. Keynote speakers included Megan Smith, Bill Nye the Science Guy, Adam Savage, Mayim Bialik, and Prof. Manu Prakash, amongst others. They acknowledged the vital role science plays in our lives and the need to respect and encourage research that gives us insight into the world.

There were folks dressed in lab coats and pink knit brain hats. There were costumed characters and festooned pets. Across the nation and abroad, as thousands of scientists and their supporters convened on Earth Day to defend science against proposed government cuts and political interference, many got their messages across with colorful and candid protest signs. I could not personally make it to the march (who else hates falling sick when something fantastic is going on?!). However, I followed the marches across the world on Twitter.

Here are a few signs/tweets I fell in love with:

Though this was not a tweet from Tyson on the day of the March for Science, it stresses the importance of science and research unlike anything else.

Scientists and citations. This one got it all right in one tweet. I had a stupid smile on my face while reading this. NIH is life. Literally.

The electrical engineer in me squealed a little when I saw the following sign. This was then inducted into my list of favorite tweets (Get it, get it… ;))

Boy, did they get this right! A woman’s place is in the lab. Agree? (:

Yes. Yes. YES.

I am not one for mixing science and politics. But with the proposed budget cuts affecting funding for research where it is needed most, politicians are forcing scientists’ hands. Also, who doesn’t love Katie Mack!?!

When we have support from one pole to another, you know we did this right!

Here are some cute brain hats for you. Pretty neat, don’t you think?

This was only the beginning. The following week (April 23-29, 2017) will be a “Week of Action”. Please read more about it on the March for Science blog. You can also join a satellite event near you here.

For those of you interested in meeting other scientists, the World Science Festival, founded by Prof. Brian Greene, will return for its 10th annual edition in New York City from May 30 to June 4, 2017. Watch out for tickets here.

To wrap up, this was the message I had planned for my sign:

May the facts be with you.


#GPUniversity: Deep Learning and Beyond

NVIDIA hosted GPUniversity, a day of talks and a hands-on workshop on Deep Learning, in the Husky Union Building (HUB) at the University of Washington, Seattle, on April 14, 2017. The workshop was organized to discuss the future of Artificial Intelligence computing and discover how Graphics Processing Units (GPUs) are powering this revolution.

The day had a solid lineup of speakers (Stan Birchfield, NVIDIA, and Prof. Ali Farhadi, UW-Seattle) and a workshop on Signal Processing using NVIDIA DIGITS.

The talks started at 10:30 am, with Dr. Stan Birchfield presenting ‘Deep Learning for Autonomous Drone Flying through Forest Trails’. He is a Principal Research Scientist at NVIDIA, Seattle. Dr. Birchfield gave us a brief overview of three major projects happening at NVIDIA. The first project described how NVIDIA is currently looking at replacing the Image Signal Processor (ISP), a collection of modules such as auto exposure, denoising, and demosaicing, with a deep learning network. Here is a blog post from NVIDIA with some background on the advances in deep learning.
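
To make the idea concrete, here is a tiny sketch (entirely my own illustration, not NVIDIA’s actual model) of what “replacing an ISP stage with a network” can look like: a small convolutional network that maps a noisy single-channel sensor patch straight to a clean RGB patch, standing in for the hand-tuned denoise/demosaic blocks.

```python
import torch
import torch.nn as nn

class TinyLearnedISP(nn.Module):
    """Hypothetical stand-in for a couple of ISP stages (denoise + demosaic)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),   # single-channel RAW-like input
            nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),   # 3-channel RGB output
        )

    def forward(self, raw):
        return self.net(raw)

model = TinyLearnedISP()
noisy_raw = torch.rand(1, 1, 64, 64)   # dummy 64x64 sensor patch
clean_rgb = model(noisy_raw)           # shape: (1, 3, 64, 64)
print(clean_rgb.shape)
```

In practice such a network would be trained on pairs of raw sensor data and reference images, but that is beyond this sketch.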

The second project was about their efforts to reduce driver distraction. Using data from inside the car, the head pose and gaze of the driver are estimated. A different research team at NVIDIA is also researching the use of hand gestures for automotive interfaces. Having worked on gesture recognition using a standard camera and computer vision algorithms, I find this research exciting. Their most recent paper appeared at CVPR 2016.
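
For contrast, this is roughly what the classical, non-deep-learning route I used to work with looks like: skin-colour thresholding plus convexity defects on the hand contour to guess a finger count. The thresholds and the whole recipe here are my own toy assumptions, not NVIDIA’s approach.

```python
import cv2
import numpy as np

def count_fingers(frame_bgr):
    """Very rough finger count from one webcam frame (classical CV, no learning)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Approximate skin-colour range in HSV; needs tuning per lighting/skin tone.
    mask = cv2.inRange(hsv, np.array([0, 30, 60]), np.array([20, 150, 255]))
    mask = cv2.medianBlur(mask, 5)
    # OpenCV 4.x return signature (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)        # assume the hand is the largest blob
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    # Each deep convexity defect roughly corresponds to the gap between two fingers.
    deep_gaps = sum(1 for i in range(defects.shape[0]) if defects[i, 0, 3] > 10000)
    return deep_gaps + 1

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
if ok:
    print("fingers:", count_fingers(frame))
```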

He finally addressed the topic of image-to-image translation before moving on to his own research. Image-to-image translation would allow one to shift images from a day view to night, from sunny to rainy, or from RGB to IR. The possibilities are endless. The system takes a raw image as input and produces the translated image as output. Here is a publication by NVIDIA I found on the topic.
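
The real architectures are beyond a blog post, but the input/output contract is simple enough to sketch. Below is a toy encoder-decoder of my own (not NVIDIA’s published model) that maps an image in one domain, say day, to an image in another, say night; in real systems a generator like this is trained adversarially on paired or unpaired data.

```python
import torch
import torch.nn as nn

class ToyTranslator(nn.Module):
    """Minimal encoder-decoder for domain-to-domain image translation."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32x32 -> 64x64
            nn.Tanh(),  # outputs in [-1, 1], as is common for generators
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

day_image = torch.rand(1, 3, 64, 64) * 2 - 1   # dummy "day" image in [-1, 1]
night_image = ToyTranslator()(day_image)       # same shape, "translated" domain
print(night_image.shape)
```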

This was followed by Dr. Birchfield’s own research on the autonomous flight of drones in forests. Most drone enthusiasts have found it hard to navigate their autonomous aerial vehicles through a forest: the trees create multipath effects and attenuate or block the signal, making GPS unreliable. If this problem could be solved, however, drones could serve multiple functions: search and rescue, environmental mapping, personal videography, and of course, drone racing!

NVIDIA’s approach to the problem eliminates the use of GPS (at this stage) and relies on deep learning for computer vision instead. The research is done using micro aerial vehicles (MAVs); specifically, a 3DR Iris+ carrying an NVIDIA Jetson TX1. Through imitation learning (the method used in NVIDIA’s self-driving cars), the drone is taught to fly along a trail and to stop at a safe distance if a human is detected. The dataset combines prior research from the University of Zurich (Giusti et al. 2016) with data collected on Pacific Northwest trails. The system also makes use of the DSO and YOLO algorithms. The mismatch between the training distribution and what the drone actually sees in flight was addressed by using three cameras instead of just one. A detailed talk about this research will be presented at the GPU Technology Conference in May. You can follow the research here.
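
Here is a heavily stripped-down sketch of the core control idea as I understood it (the architecture, class labels, and yaw values below are my assumptions, not NVIDIA’s actual code): a small CNN classifies the forward camera view as “trail veers left / straight / right”, and that class is mapped to a steering command. During training, the left and right cameras supply off-center views labelled with the corrective action, which is what counters the distribution mismatch.

```python
import torch
import torch.nn as nn

class TrailOrientationNet(nn.Module):
    """Classifies a camera frame into: trail goes left / straight / right."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def steering_command(logits):
    """Map the predicted class to a simple yaw-rate command (illustrative values)."""
    yaw_for_class = {0: +0.5, 1: 0.0, 2: -0.5}   # turn left / go straight / turn right
    return yaw_for_class[int(logits.argmax(dim=1))]

net = TrailOrientationNet()
frame = torch.rand(1, 3, 120, 160)   # dummy forward-camera frame
print("yaw command:", steering_command(net(frame)))
```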

Professor Ali Farhadi gave an interactive session on Visual Intelligence. He started his presentation by showcasing the performance of YOLO in real time.

[Image: YOLO running in real time on a mobile phone]

An additional demo that followed showed the design of a $5 computer that detects people, built using a Raspberry Pi Zero.
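
The demo’s internals weren’t shared, so here is a minimal stand-in of my own: detecting people in a single frame with OpenCV’s built-in HOG pedestrian detector, which is light enough for very small computers. This only illustrates the task; it is not necessarily what the Raspberry Pi Zero demo actually ran.

```python
import cv2

# Pre-trained HOG + linear SVM pedestrian detector that ships with OpenCV.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture(0)   # default camera
ok, frame = cap.read()
cap.release()

if ok:
    boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8))
    print(f"people detected: {len(boxes)}")
    for (x, y, w, h) in boxes:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("people.jpg", frame)   # save the annotated frame
```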

Prof. Farhadi took us through a number of projects in his 45-minute talk. The man never fails to impress (I have been in his class and he is an inspiring teacher!). I am going to provide a brief description of these projects and add links to publications/research websites below.

Visual recognition involves visual knowledge, data, parsing and visual reasoning. The action-centric view of visual recognition involves three parts: recognizing actions, predicting expected outcomes and devising a plan. The projects discussed include all these factors.

  1. imsitu.org: It is used for situation recognition, as opposed to treating all the components of an image as objects. This enables the system not just to predict the objects or locations, but to include information on the activity being performed and the roles of the participants performing it. The demo provided on the website implements a compositional conditional random field, pre-trained using semantic data augmentation on 5 million web images.
    Go ahead and try it here.
  2. Learn EVerything about ANything (LEVAN): Single camera systems pose a problem when size is a determining factor for visual intelligence. However, if we are able to understand the average sizes of objects, we could make better predictions by imposing a distribution. LEVAN acts as a visual encyclopedia for you, helping you explore and understand in detail any topic that you are curious about.
    Try the demo here. If it does not have a concept you are looking for, click and add it to the database! 🙂
  3. Visual Knowledge Extraction Engine (VisKE): To describe it briefly, VisKE does visual fact checking. It provides the most probable explanation based on visual evidence gathered from images on the internet. It generates a factor graph that assigns scores based on how much it visually trusts the information.
    Try the demo here.
  4. Visual Newtonian Dynamics (VIND): VIND predicts the dynamics of query objects in static images. The dataset compiled includes videos aligned with Newtonian scenarios represented using game engines, and still images with their ground truth dynamics. A Newtonian neural network performs the correlation.
  5. What Happens if?: By making use of the Forces in Scenes (ForScene) dataset from the University of Washington, and using a combination of Recurrent Neural Nets with Convolutional Neural Nets, this project aims to understand the effect of external forces on objects. The system makes sequential predictions based on the force vector applied to a specific location.
  6. AI2 THOR Framework: THOR is a framework of visually realistic scenes rendered for studying actions based on visual input (a minimal usage sketch follows this list).
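
For the THOR framework, the project ships a Python controller. The sketch below follows the early published examples (the exact calls may differ between releases), showing how an agent loads a scene, takes an action, and reads back its observation.

```python
import ai2thor.controller

controller = ai2thor.controller.Controller()
controller.start()                                          # launch the Unity-based environment
controller.reset('FloorPlan28')                             # load one of the rendered scenes
controller.step(dict(action='Initialize', gridSize=0.25))   # discretize agent motion

event = controller.step(dict(action='MoveAhead'))           # take one step forward
print(event.frame.shape)          # the agent's RGB observation (numpy array)
print(event.metadata['agent'])    # agent pose and other scene metadata
```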

Hope these projects shed more light on the possibilities in Computer Vision and Deep Learning.

[Image: GPUniversity workshop by the Deep Learning Institute]

If you would like to get your hands dirty, try nvlabs.qwiklab.com for access to NVIDIA DIGITS or courses mentioned on the Deep Learning Institute website.

Depression: Why it is important to talk

April 7, 2017. World Health Day. This year the World Health Organization (WHO) is leading a campaign to raise awareness about depression (Depression: Let’s talk), currently the leading cause of disability worldwide. Let’s talk about that.

According to a recently updated fact sheet by the WHO, over 300 million people are affected by depression worldwide, and the numbers are only increasing. With high-stress jobs and increasing social pressure to perform, anxiety and depression affect people of all ages, from all walks of life, in all countries. So if you are one among the 300 million, know this…

You are not alone.

Depression is common. It is a medical condition affecting the brain, much like a tumor or Parkinson’s disease. There is nothing wrong with you.

Like tumors or any other illness, depression can be treated. It is one of the best documented yet least discussed health problems. The statistics on suicides due to depression are staggering. However, if we take a closer look, many of these could have been prevented had the signs been detected early in a person’s life. And a majority of people experiencing mental health problems never receive any form of care.

[Image: You are stronger than you know]

One of the key problems in our society is the huge stigma around talking about depression and mental health. Raised to believe that feeling vulnerable is a weakness and a sign of personal inadequacy, most people find it hard to discuss their emotions. The stigmatization of depression does nothing to help those grappling with it; in fact, treating depression as a personal problem rather than an illness can deter depressed people from seeking professional help and cause them to feel guilty instead.

We need to break the stigma around depression.

Research has shown the benefits of voicing thoughts and feelings as a step towards recovery. Psychotherapy, commonly referred to as talk therapy, is designed to relieve patients’ despondence by providing a mental toolkit for challenging negative thoughts. This kind of therapy helps us learn about ourselves in such a deep and broad way that we can apply that understanding in a variety of situations.

For those of you trying to help a dear one, take a minute to look at this article. It is important to educate yourself about the don’ts during such a conversation. It is crucial not to dismiss or belittle someone’s condition while trying to help them. Sometimes it is okay to just listen.

For those of you battling depression (yes, that’s the word I chose to use… cuz you’re braver than most!), I understand that at a time when the word ‘depression’ is used loosely, it is hard to gauge whether people fully understand what you are going through. It may feel impossible to explain the helpless feelings you have inside to others, or even to gather the strength to confide in someone. But choosing to talk to someone you trust could help you understand that you are not in this alone. You don’t have to fight this fight alone.

There are many paths to recovery, and each person’s path may be different. Whether you attend self-help groups, speak to a clinician, seek medication, or simply talk to loved ones, it’s important to share your feelings. Explaining your condition and symptoms will help those around you, including yourself, understand what it is you’re going through.

I am not a therapist. I will probably not have the best solutions to your problems. But if you need someone to listen to you, know that I can be that person. An ally in this crazy, beautiful world… Where you belong.