The Centre for Advanced Computing offers a wide variety of workshops and training opportunities to advance and maintain your HQP’s skills. Our user support and cognitive development specialists deliver training at any experience level, from introductory Linux to advanced parallel programming optimization. Hot topics such as cloud computing, Apache Spark, and data analytics are also available.
Looking for something outside of our standard curriculum? We will work with your requirements to meet your team’s training needs.
Data and Pipelines Workshops:
Data Understanding: 2 hours
This introductory workshop on Data Analytics starts by introducing the Data Analytics pipeline and its processes. It then discusses the different statistical and visualization approaches for conducting Exploratory and Descriptive Analytics on data to answer the question of “What happened in the past?”
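As a taste of the kind of descriptive analytics the workshop covers, summary statistics can be computed with nothing more than Python’s standard library (the sales figures below are invented purely for illustration):

```python
import statistics

# Hypothetical monthly sales figures, invented for illustration
sales = [120, 135, 128, 150, 149, 162, 158, 171]

# Descriptive analytics summarizes "what happened in the past"
print("mean:  ", statistics.mean(sales))     # central tendency
print("median:", statistics.median(sales))   # robust central tendency
print("stdev: ", round(statistics.stdev(sales), 2))  # spread
print("range: ", max(sales) - min(sales))    # simplest spread measure
```

The workshop goes well beyond this, pairing such summaries with visualization to explore a data set.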
Text Mining: 4 hours
Text mining is the process of extracting meaning, patterns and trends from unstructured textual data. Massive amounts of unstructured text are prevalent today, yet traditional machine learning algorithms handle only numerical or categorical data, so existing data analytics platforms provide special components to facilitate the analysis of textual data. This workshop introduces the topic of text mining and provides a tour, with hands-on exercises and demonstrations, of four text mining tools, each of which supports an interesting and diverse set of features.
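The simplest building block of text mining, term-frequency counting, can be sketched in a few lines of standard-library Python (the sample sentence is invented; the tools covered in the workshop go far beyond this):

```python
import re
from collections import Counter

# A toy piece of unstructured text, invented for illustration
text = "Text mining extracts meaning from text; text is unstructured data."

# Tokenize: lowercase the text and keep alphabetic words only
tokens = re.findall(r"[a-z]+", text.lower())

# Term frequencies are the starting point of many text-mining pipelines
freq = Counter(tokens)
print(freq.most_common(3))
```

Turning raw text into counts like these is one way unstructured data becomes the numerical input that traditional algorithms expect.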
Data Preparation: 4 hours
The Data Preparation workshop covers the different approaches for preparing data, including data cleaning, handling of missing values, outlier detection and treatment, feature transformation, and the art of feature engineering, which is considered one of the most vital operations in the Data Analytics process.
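One of the techniques mentioned above, handling missing values, can be illustrated with simple mean imputation in standard-library Python (the sensor readings are invented for this sketch; the workshop covers many more approaches):

```python
import statistics

# Hypothetical sensor readings; None marks a missing value (invented data)
readings = [20.1, None, 19.8, 21.5, None, 20.4]

# Mean imputation: replace each missing value with the mean of the observed ones
observed = [r for r in readings if r is not None]
fill = statistics.mean(observed)
cleaned = [fill if r is None else r for r in readings]
print(cleaned)
```

Mean imputation is only the simplest option; choosing the right strategy for a given data set is exactly the kind of judgment the workshop develops.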
Machine Learning Workshops:
Unsupervised Learning: 3 hours
This workshop introduces Unsupervised Learning approaches for clustering data to find hidden groups within it. Algorithms discussed in this workshop include K-Means, K-Medoids, Fuzzy C-Means, Hierarchical Clustering, and Self-Organizing Maps. The workshop also introduces a set of statistical evaluation methods for evaluating the existence of groups in a data set, comparing the algorithms’ performance, and assessing a cluster’s stability.
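To give a flavour of the first algorithm on that list, here is a deliberately tiny K-Means sketch in pure Python: assign each point to its nearest centre, then move each centre to the mean of its assigned points, and repeat. The 1-D data and initial centres are invented for illustration:

```python
# A minimal 1-D K-Means sketch (invented data; k = 2)
points = [1.0, 1.2, 0.8, 8.0, 8.2, 7.8]
centres = [0.0, 10.0]  # arbitrary initial guesses for the two centres

for _ in range(10):  # a few assign/update iterations converge here
    # Assignment step: attach each point to its nearest centre
    clusters = {i: [] for i in range(len(centres))}
    for p in points:
        nearest = min(range(len(centres)), key=lambda j: abs(p - centres[j]))
        clusters[nearest].append(p)
    # Update step: move each centre to the mean of its assigned points
    centres = [sum(c) / len(c) for c in clusters.values()]

print(sorted(centres))
```

Real K-Means works in many dimensions and needs care with initialization and empty clusters; the workshop covers those details along with the other algorithms.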
Introduction to Spark: 4 hours
Apache Spark is one of the most popular projects in the Hadoop ecosystem. This workshop provides an overview of the Spark environment, its programming model, and its core data abstractions. It introduces you to the Spark SQL API and the Spark Machine Learning Library API. Two practical application examples will be presented in the context of the IBM Watson Data Platform.
Supervised Learning: 5 hours
This workshop introduces Predictive Analytics to answer the question of “What will happen?”. It discusses when and how to use the different predictive Machine Learning algorithms. The workshop covers algorithms in (1) Supervised Learning (Classification and Regression) such as KNN, Decision Trees, Random Forest, Naïve Bayes, Support Vector Machines, Neural Networks, Logistic and Linear Regression and (2) Ensemble Learning techniques such as Bagging, Boosting, Gradient Boosting and Stacking. The workshop also introduces a set of statistical evaluation methods to compare the performance of different algorithms.
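One of the simplest classifiers on that list, KNN, can be sketched in a few lines of pure Python: rank the training points by distance to a query point and take a majority vote among the k nearest labels. The 2-D data and labels below are invented for illustration:

```python
from collections import Counter

# Invented labelled training data: two well-separated 2-D clusters
train = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"), ((0.9, 1.1), "A"),
         ((5.0, 5.0), "B"), ((5.2, 4.9), "B"), ((4.8, 5.1), "B")]

def knn_predict(x, k=3):
    # Rank training points by squared Euclidean distance to x
    ranked = sorted(train,
                    key=lambda t: (t[0][0] - x[0]) ** 2 + (t[0][1] - x[1]) ** 2)
    # Majority vote among the k nearest labels
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

print(knn_predict((1.1, 0.9)))  # a query point near the "A" cluster
```

Knowing when such a simple method suffices, and when to reach for trees, SVMs, neural networks, or ensembles instead, is the core question the workshop addresses.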
Cloud Computing Workshops:
Cloud Computing: 2 hours
The cloud computing workshop gives a brief introduction to the main concepts of cloud computing. The workshop then introduces the IBM Cloud and goes over the steps needed to start working in this environment including creating an account, understanding the dashboard and creating and deploying an application in the IBM Cloud.
Chatbots – Smart Customer Service: 6 hours
Chatbots have grown in popularity over the past couple of years. Advances in technology and Artificial Intelligence have simplified the seemingly daunting task of creating a chatbot; today, anyone can create one. Companies are quickly adopting chatbots as a tool to improve their customer service, which benefits not only their clients but also their own teams. This workshop is an introduction to chatbots and their capabilities, using the IBM Watson Assistant platform in combination with other cloud technologies. The workshop explores the entire process, from ideation to deployment, of a chatbot solution. This workshop is also available as an in-depth two-day workshop (12 hours).
Programming Language Workshops:
Analysis Pipelines with Python: 6 hours
Python is perhaps the most versatile programming language in existence, and sees widespread use in every field of modern computing. The first part of this tutorial focuses on Python for high-performance computing applications, including performance optimization, parallel programming, and pipelining. The second part focuses on using Python to (easily) write and scale massively parallel data analysis pipelines across a cluster.
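The full parallel material needs a cluster, but the pipelining idea itself can be sketched with plain Python generators: independent stages that compose lazily, so each record flows through the whole pipeline on demand (the data and stage names here are invented for illustration):

```python
# A small, composable analysis pipeline built from generator stages
def read_records():
    # Stand-in for a real data source such as a file or database cursor
    yield from [3, 7, 10, 15, 22]

def keep_even(records):
    # Filter stage: let only even values through
    return (r for r in records if r % 2 == 0)

def scale(records, factor=2):
    # Transform stage: scale each surviving record
    return (r * factor for r in records)

# Stages compose lazily; nothing runs until the result is consumed
result = list(scale(keep_even(read_records())))
print(result)
```

The same stage structure is what makes such pipelines easy to parallelize and scale, which is where the tutorial takes it.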
Systems Tools and Data Workshops:
Databases and SQL: 4 hours
A relational database is a common way to store and manipulate information, especially in business and corporate environments. Databases include powerful tools for search and analysis, and can handle large, complex data sets. This lesson is focused on teaching the basics of using, manipulating, and creating databases using SQLite as a teaching tool.
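Because SQLite ships with Python’s standard library, the basics taught in this lesson can be tried without installing anything. A minimal sketch, with an invented table and invented rows:

```python
import sqlite3

# An in-memory SQLite database; the schema and rows are invented for illustration
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (name TEXT, age INTEGER)")
conn.executemany("INSERT INTO person VALUES (?, ?)",
                 [("Ada", 36), ("Grace", 45), ("Alan", 41)])

# A basic query: filter and sort in SQL rather than in application code
rows = conn.execute(
    "SELECT name FROM person WHERE age > 40 ORDER BY name").fetchall()
print(rows)
conn.close()
```

Pushing the filtering and sorting into the database, as here, is exactly what lets relational databases handle large, complex data sets efficiently.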
The UNIX Shell: 4 hours
This class serves as an introduction to Linux, the UNIX-like operating system that runs on almost all high-performance computing systems. It is intended for users who have little or no experience with UNIX or Linux. The focus is on the common bash shell. We cover material that helps the user develop an understanding of the Linux command-line environment, which is necessary for successful use of UNIX.
Version control with Git: 4 hours
Version control is a method of intelligently managing code for any project, enabling programmers to collaborate, keep track of changes, track down bugs, and maintain multiple versions/backups of their software. During this tutorial, students learn the basics of version control using Git, as well as how to host and collaborate on coding projects with online services like GitHub.
High Performance Computing Workshops:
Introduction to High-Performance Computing: 2 hours
This workshop is an introduction to using high-performance computing systems effectively. We can’t cover every case or give an exhaustive course on parallel programming in just 2 hours of teaching time. Instead, this workshop is intended to give students a good introduction and overview of the tools available and how to use them effectively. By the end of this workshop, students will know how to use the UNIX command line to operate a computer, connect to a cluster, write simple shell scripts, submit and manage jobs on a cluster using a scheduler, transfer files, and use software through environment modules.
Prices are available upon request. Please contact us.