The NVIDIA Deep Learning Institute (DLI), Texas A&M Institute of Data Science, Texas A&M High Performance Research Computing, and Texas Engineering Experiment Station invite you to a hands-on deep learning workshop on December 14, 2018, from 8:30 AM to 5:00 PM at the ILSB Auditorium. The workshop is open exclusively to verifiable academic students, staff, and researchers.
NVIDIA DLI offers hands-on training for developers, data scientists, and researchers looking to solve challenging problems with deep learning and accelerated computing.
About This Workshop:
In this hands-on course, you will learn the basics of deep learning by training and deploying neural networks. You will:
- Implement common deep learning workflows such as Image Classification and Object Detection.
- Experiment with data, training parameters, network structure, and other strategies to increase performance and capability.
- Deploy your networks to start solving real-world problems.
On completion of this course, you will be able to start solving your own problems with deep learning.
Prerequisites: Familiarity with basic programming fundamentals, such as functions and variables.
NVIDIA DLI Certification:
Through built-in assessments, participants can earn certification to prove subject matter competency and support professional career growth.
Workshop Agenda:
08:30 Registration (breakfast provided)
09:00 Deep Learning Demystified (lecture)
10:00 Image Classification with DIGITS (hands-on lab)
12:00 Lunch (provided)
13:00 Object Detection with DIGITS (hands-on lab)
14:50 Break (refreshments & soft drinks)
15:00 Neural Network Deployment with DIGITS and TensorRT (hands-on lab)
Lab Descriptions:
- DLI Lab #1: Image Classification with DIGITS
Learn how to leverage deep neural networks (DNNs) within the deep learning workflow to solve a real-world image classification problem using NVIDIA DIGITS. You’ll walk through the process of data preparation, model definition, model training and troubleshooting, validation testing, and strategies for improving model performance using GPUs.
On completion of this lab, you will be able to use NVIDIA DIGITS to train a DNN on your own image classification application.
- DLI Lab #2: Object Detection with DIGITS
Many problems have established deep learning solutions, but sometimes the problem that you want to solve does not. Learn to create custom solutions through the challenge of detecting whale faces from aerial images by:
- Combining traditional computer vision with deep learning.
- Performing minor “brain surgery” on an existing neural network using the deep learning framework Caffe.
- Harnessing the knowledge of the deep learning community by identifying and using a purpose-built network and end-to-end labeled data.
Upon completion of this lab, you will be able to solve custom problems with deep learning.
- DLI Lab #3: Neural Network Deployment with DIGITS and TensorRT
Deep learning allows us to map inputs to outputs, a process that is extremely computationally intensive. Learn to deploy deep learning to applications that recognize images and detect pedestrians in real time by:
- Accessing and understanding the files that make up a trained model
- Building from each function’s unique input and output
- Optimizing the most computationally intense parts of your application for different performance metrics like throughput and latency
Upon completion of this lab, you will be able to implement deep learning to solve problems in the real world.
Workshop Setup Instructions:
1. Create an NVIDIA Developer account at http://courses.nvidia.com/join.
2. Make sure that WebSockets works for you:
- Test your laptop at http://websocketstest.com
- Under ENVIRONMENT, confirm that “WebSockets” shows “Yes.”
- Under WEBSOCKETS (PORT 80), confirm that “Data Receive,” “Send,” and “Echo Test” all show “Yes.”
3. If there are issues with WebSockets, try updating your browser. We recommend Chrome, Firefox, or Safari for optimal performance.
4. Once onsite, visit http://courses.nvidia.com/dli-event and enter the event code provided by the instructor.
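As a quick preliminary check, you can also paste a short snippet into your browser’s developer console to confirm WebSocket support (a minimal sketch — it only checks that the `WebSocket` API exists, not that your network allows WebSocket traffic, which is what websocketstest.com verifies):

```javascript
// Minimal WebSocket availability check — paste into the browser's
// developer console. This only confirms the API is present; firewall
// or proxy issues can still block WebSocket traffic.
function webSocketsSupported() {
  return typeof WebSocket !== "undefined";
}

if (webSocketsSupported()) {
  console.log("WebSocket API available in this browser");
} else {
  console.log("WebSocket API NOT available - try updating your browser");
}
```

If the API is missing, updating to a current version of Chrome, Firefox, or Safari should resolve it.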
This workshop is brought to you by: NVIDIA, TAMIDS, HPRC, and TEES.