Artificial Intelligence
12:15 — 12:45
"An intuitive introduction into Reinforcement Learning"
Eliran Natan
Machine Learning
14:20 — 14:50
"Machine Learning and Marketing, does that fit together?"
Alexander Möllers
14:50 — 15:20
"Apriori Unification Pattern for Efficient ML"
Ravishankar Nair
15:20 — 15:50
"Beyond OCR: Using deep learning to understand documents"
Eitan Anzenberg
15:50 — 16:20
Break - 5 minutes
Data Science
10:10 — 10:40
"Intro to Pydantic, Run-Time Type Checking For Your Dataclasses"
Alexander Hultnér
10:40 — 11:10
"Personalising Dinner Using Python!"
Irene Iriarte Carretero
11:10 — 11:40
"The Power of A/B Testing"
Bishal Agrawal
11:40 — 12:10
Break - 5 minutes
10:00 — 10:10
Entry-level track
Gives access to Junior track only with no recordings. Focuses on entry-level content around Python.
13:45 — 14:15
Data Science
Artificial Intelligence
14:50 — 15:30
"A Dive into Hyperparameter Optimization in Machine Learning"
Tanay Agrawal
15:30 — 16:10
"Build your Machine Learning models the easy way with SPSS"
Mohammad Fawaz Siddiqi
Anam Mahmood
16:50 — 17:30
Break - 10 minutes
Machine Learning
12:15 — 12:40
"An E-commerce Transformer-based Decision-making Recommender
Denis Rothman
Break - 5 minutes
10:10 — 10:40
"Interactive Knowledge Graph Visualization in Jupyter Notebook"
Cheuk Ho
10:40 — 11:10
""Who can help me?": Knowledge Infused Matching of Support Seekers and Support Providers on Social Media"
Manas Gaur
10:00 — 10:10
Full Access
11:40 — 12:10
"Using Machine Learning to Predict Drug-Drug Interactions in Medicine"
16:10 — 16:50
Calen Tang
Asher Ng Jing Jie
14:00 — 14:40
17:30 — 18:10
SwiftUI + Python = Magic
In this session Max will show you how to mix and match a TensorFlow model with SwiftUI to create a magical experience on iOS. While some familiarity with Swift might be nice to have, this session has been built specifically for Python programmers!
Intro to Pydantic, Run-Time Type Checking For Your Dataclasses
Want static type checking at run time? Want to use standard Python type annotations? Want compatibility with standard Python dataclasses? Then it sounds like pydantic is something for you. Pydantic offers a Pythonic way to validate your user data using run-time-enforced standard type annotations.

This talk focuses on how Pydantic can be used with web APIs to simplify user-input validation. Back in early 2018 I built a similar solution to Pydantic, based on standard dataclasses, for a large B2B SaaS application built with Flask. When I left that project I briefly considered rebuilding it as open source, but while doing my research I rediscovered Pydantic, which I had put on my keep-tabs-on list when it was at a much earlier stage. By this point it had evolved into a really polished library and a perfect companion for JSON-based APIs.
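As a sketch of the dataclass-based approach described above, a hand-rolled run-time check of standard type annotations might look like this (the `User` model and `validate` helper are illustrative and handle plain, non-generic annotations only; Pydantic automates and generalises all of this):

```python
from dataclasses import dataclass, fields

def validate(instance):
    """Enforce a dataclass's type annotations at run time (simple types only)."""
    for f in fields(instance):
        value = getattr(instance, f.name)
        # Skip string/generic annotations; check plain classes like int or str.
        if isinstance(f.type, type) and not isinstance(value, f.type):
            raise TypeError(
                f"{f.name}: expected {f.type.__name__}, got {type(value).__name__}"
            )

@dataclass
class User:
    id: int
    name: str

validate(User(id=7, name="Ada"))        # passes silently
try:
    validate(User(id="7", name="Ada"))  # str where int is annotated
except TypeError as err:
    print(err)
```

Pydantic goes further than this sketch: it coerces compatible input (for example the string "7" to the int 7), handles nested models and generics, and reports all validation errors at once.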
The Power of A/B Testing
In a data-driven world, it is becoming harder to make decisions because of information overload and data noise. What should we do if we are not sure about the impact of a change we are bringing to our platform? This is where A/B testing comes in: it helps us decide whether the change actually had an impact, and whether that impact was positive, negative, or neutral.
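A minimal sketch of the statistics behind a conversion-rate A/B test, a two-proportion z-test implemented with the standard library (the conversion numbers are made up for illustration):

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates between variants."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Variant A: 200 conversions out of 2000; variant B: 260 out of 2000.
z, p = two_proportion_z_test(200, 2000, 260, 2000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05: the lift is unlikely to be noise
```

A small p-value says the observed lift is unlikely under the null hypothesis of "no difference"; it does not by itself say the lift is large enough to matter.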
An intuitive introduction into Reinforcement Learning
Driven by rewards and guided by their own life experience, AI agents are much like us. They can be clueless or wise, playful or mature, newbies or pros. In this session, you will understand how such attributes can be expressed in code, as we explore the fundamental ideas behind Q-learning and Deep RL. Finally, we will use TensorFlow to solve a well-known problem in this area.
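A minimal tabular Q-learning sketch conveys the core update the session builds on; the corridor environment and hyperparameters below are illustrative, not from the talk:

```python
import random

# Minimal tabular Q-learning on a toy 1-D corridor: states 0..4, and the
# agent earns a reward of 1 for reaching the rightmost state.
N_STATES = 5
ACTIONS = (-1, +1)                       # move left / move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1    # learning rate, discount, exploration
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
random.seed(0)

def greedy(s):
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])  # random tie-break

for _ in range(300):                     # episodes
    s = 0
    while s != N_STATES - 1:
        a = random.choice(ACTIONS) if random.random() < EPSILON else greedy(s)
        nxt = max(0, min(N_STATES - 1, s + a))
        reward = 1.0 if nxt == N_STATES - 1 else 0.0
        # Q-learning update: move Q(s, a) toward reward + discounted best next value.
        target = reward + GAMMA * max(Q[(nxt, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = nxt

# The learned greedy policy moves right from every non-terminal state.
policy = {s: greedy(s) for s in range(N_STATES - 1)}
print(policy)
```

Deep RL replaces the Q table with a neural network (for example in TensorFlow) that approximates Q(s, a) when the state space is too large to enumerate.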
Machine Learning and Marketing, does that fit together?
Ready to be taken on a journey through the world of data-driven marketing?

Well, when I started working in my current position I was not. In fact, it seemed odd to me that in a world where the amount of data available grows by the day, companies still struggle with something as seemingly simple as optimising their marketing activity. Nowadays I know better: it is not simple at all. The sheer complexity of information that arises from the interplay of factors as diverse as customer interactions, competitive dynamics, and long-term company strategy makes it difficult to find a one-size-fits-all solution.

Let me guide you from Markov Chain Attribution to Marketing Mix Modelling through the world of mathematical marketing and open the door for further exploration.
Apriori Unification Pattern for Efficient ML
There is an obvious need to combine multiple data sources (structured, unstructured, or real-time) for AI and ML training. In almost all the ML and data tools currently in the industry, we bring data from each polyglot source individually into the consuming tool (e.g. Python, Julia, or R) and then join the data there. This causes considerable use of memory just to store the data and process the join. We can instead create an apriori layer in front of these tools to do the heavy lifting and then consume the result through simple SQL, thus using our AI and ML platform for compute and model training rather than for storing data frames. The talk presents an in-depth analysis of managing the network, unifying polyglot persistence, and providing seamless access to all kinds of data for faster and more efficient ML applications.
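The core idea, pushing the join into a unification layer so the ML tool only consumes one pre-joined result, can be sketched with the standard library's sqlite3 standing in for that layer (table names and data are illustrative):

```python
import sqlite3

# Two "sources" unified behind a SQL layer; the consuming tool only ever
# sees the pre-joined rows, never the full tables.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE events(user_id INTEGER, feature REAL);
    CREATE TABLE labels(user_id INTEGER, label INTEGER);
    INSERT INTO events VALUES (1, 0.3), (2, 0.9);
    INSERT INTO labels VALUES (1, 0), (2, 1);
""")

# The join happens in the layer, not in an in-memory Python data frame.
rows = con.execute("""
    SELECT e.feature, l.label
    FROM events AS e JOIN labels AS l USING (user_id)
    ORDER BY e.user_id
""").fetchall()
print(rows)
```

In the pattern the talk describes, the layer in front would federate genuinely polyglot sources (files, streams, databases) rather than two tables in one database, but the consumption contract is the same: plain SQL in, training-ready rows out.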
Beyond OCR: Using deep learning to understand documents
Extracting key fields from a variety of document types remains a challenging machine learning problem. Services such as AWS and Google Cloud provide text-extraction products to "digitize" images or PDFs. These return phrases, words, and characters with their corresponding coordinate locations. Working with these outputs remains challenging and unscalable, as different document types require different heuristics, with new types uploaded daily. Furthermore, a performance ceiling is reached even if these algorithms work perfectly, since they can at best equal the accuracy of the service's OCR.

We propose an end-to-end scalable solution utilizing deep learning and OCR architecture to automatically extract important text fields from documents. Computer vision algorithms utilizing deep learning produce state-of-the-art classification accuracy and generalizability through training on millions of images. Region proposals are generated by off-the-shelf OCRs, including Tesseract, and we compare the in-house model's accuracy with 3rd-party OCR services. We are working to build a paperless future: we parse millions of documents a year, ranging from invoices, contracts, and receipts to a variety of other types, and understanding those documents is critical to building intelligent products for our users.
Personalising Dinner Using Python!
This talk will describe how Gousto, a leading recipe box service based in the UK, is using python to build a personalisation ecosystem. Our menu planning optimisation algorithm allows us to create the perfect mix of recipes, ensuring a variety of dish types, cuisines and ingredients. Our recommendation engine sitting on top of this can then offer each customer a personally curated menu, making sure that all users have meaningful choice. All this while ensuring that we are also optimising for maximum performance from an operational point of view!

The talk will give an overview of our methods, our infrastructure, our results and everything that we have learnt along the way.
Interactive Knowledge Graph Visualization in Jupyter Notebook
Need a nice knowledge graph visualization? Graphviz is not interactive and is difficult to customize. D3 is interactive, but I don't want to write JavaScript. A Python library that makes interactive graph visualization in a Jupyter notebook possible sounds like a dream, and we will show you how it came true.
"Who can help me?": Knowledge Infused Matching of Support Seekers and Support Providers on Social Media
During a crisis such as COVID-19, effective community management on social media has been realized by first identifying users with concerns and users that can provide support, and then connecting (matching) them. The diverse user perspectives help individuals seek informative care through experience- or action-oriented information. However, the matching of a user with a concern, a support seeker (SS), with a user with relevant experience, a support provider (SP), is the responsibility of human moderators, who scan the posts and use their medical expertise to find suitable matches. Thus, an automated system that captures the medical knowledge implicit in the posts to match a support seeker with a support provider efficiently can significantly help both users and moderators, especially during a crisis. In this talk, I will describe the procedure for explainable data creation, contextualization, and abstraction, and a matching algorithm that efficiently recommends SPs to SSs in real time.
An E-commerce Transformer-based Decision-making Recommender
Transformer-based decision-making recommenders for E-commerce are replacing older, now-obsolete algorithms such as RNNs. E-commerce is primarily based on supply chain management. When we purchase a product online, it immediately triggers a chain of events in production, warehouse, and delivery management.
The presentation starts by explaining why RNNs are obsolete and introduces the architecture of Transformers. It then dives into Python source code to show how to generate MDP sequences that take product delivery constraints into account. A transformer-based recommender Python notebook will predict decisions. Finally, a Python program will simulate real-time production as soon as a consumer purchases a product online.
By the end of the presentation, you will understand why transformers are replacing RNNs and how to use them in real life.
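One of the building blocks mentioned above, generating MDP-style sequences that respect delivery constraints, might look roughly like this (the states and transitions are illustrative, not the speaker's code):

```python
import random

# Purchase-to-delivery sequences modelled as an MDP whose transitions encode
# delivery constraints, e.g. an order can only ship after it has been produced.
random.seed(3)
TRANSITIONS = {
    "ordered":       ["in_production"],
    "in_production": ["in_warehouse"],
    "in_warehouse":  ["in_transit", "in_warehouse"],  # may wait for capacity
    "in_transit":    ["delivered"],
}

def generate_sequence(max_len=10):
    """Sample one constraint-respecting state sequence, capped at max_len."""
    state, seq = "ordered", ["ordered"]
    while state != "delivered" and len(seq) < max_len:
        state = random.choice(TRANSITIONS[state])
        seq.append(state)
    return seq

print(generate_sequence())
```

Sequences like these are the kind of training and evaluation input a sequence model, whether an RNN or a transformer, can consume.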
A Dive into Hyperparameter Optimization in Machine Learning
We'll start with the importance of hyperparameters in predictive modeling algorithms. Starting with very basic HPO algorithms like grid search and random search, we'll jump to advanced sequential model-based Bayesian optimization (SMBO) algorithms. We'll discuss a little of the mathematics behind SMBOs and have a hands-on session with libraries like Hyperopt and Optuna.
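The two baseline strategies the session starts from, grid search and random search, can be sketched over a toy objective (the objective function and search ranges are illustrative):

```python
import random

def objective(lr, reg):
    # Stand-in for a validation loss over two hyperparameters; its true
    # optimum is at lr = 0.1, reg = 0.01.
    return (lr - 0.1) ** 2 + (reg - 0.01) ** 2

random.seed(42)

# Grid search: evaluate every combination on a fixed grid (12 evaluations).
grid_lrs = [0.001, 0.01, 0.1, 1.0]
grid_regs = [0.0, 0.01, 0.1]
best_grid = min((objective(lr, r), lr, r) for lr in grid_lrs for r in grid_regs)

# Random search: spend the same 12 evaluations on random points instead.
trials = []
for _ in range(12):
    lr = random.uniform(0.001, 1.0)
    r = random.uniform(0.0, 0.1)
    trials.append((objective(lr, r), lr, r))
best_random = min(trials)

print("grid search best  :", best_grid)
print("random search best:", best_random)
```

SMBO methods such as those in Hyperopt and Optuna improve on both by fitting a surrogate model to past trials and choosing the next point to evaluate rather than sampling blindly.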
Build your Machine Learning models the easy way with SPSS
This tutorial explains how to graphically build and evaluate machine learning models by using the SPSS Modeler flow feature in IBM® Watson™ Studio. IBM Watson SPSS Modeler flows in Watson Studio provide an interactive environment for quickly building machine learning pipelines that flow data from ingestion to transformation to model building and evaluation, without needing any code. This tutorial introduces the SPSS Modeler components and explains how you can use them to build, test, evaluate, and deploy models.

Mohammad Fawaz Siddiqi
Anam Mahmood
Using Machine Learning to Predict Drug-Drug Interactions
Drug-drug interactions (DDIs) are an often-overlooked aspect of the medical field which can have drastic implications. During the prescription and consumption of drugs, adverse drug reactions (ADRs) may result, with significant impacts on one's health. However, limitations in clinical trials mean that ADRs may only be detected when they happen after approval for clinical use. Hence, to assist in the prediction of DDIs, machine learning algorithms can be used to identify drugs with a high potential for interactions.

Our project uses data from the DrugBank database, including Anatomical Therapeutic Classification (ATC) codes and Simplified Molecular-Input Line-Entry System (SMILES) codes, as well as the drug interactions themselves. We obtained 2,770 drugs with ATC and SMILES codes as valid drugs for analysis. By extracting interactions of each type into an individual CSV file, we were able to analyse the properties of each drug, running KNN, Decision Tree regression and classification, Random Forest regression and classification, and naive Bayes prediction models. The prediction classifiers compared the chemical, therapeutic, and interactive similarities of each drug to predict whether the test set would show an adverse reaction. We then ran various metrics on the models, finding that Decision Tree produces the best classification and regression model for the prediction of DDIs.

While the limitations of our project included a lack of fully comprehensive data, which resulted in a fairly small sample size, with proper access to information such a method can be expanded to provide accurate and reliable results.
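A toy version of the k-nearest-neighbours step, classifying drug pairs from similarity scores, might look like this (the synthetic data stands in for the DrugBank-derived features; nothing here is from the actual study):

```python
import random
from collections import Counter

# Synthetic stand-in: each drug pair is described by three similarity scores
# (chemical, therapeutic, interactive) and labelled by whether an adverse
# interaction was observed.
random.seed(1)

def make_pair(interacts):
    base = 0.7 if interacts else 0.3
    features = [min(1.0, max(0.0, random.gauss(base, 0.1))) for _ in range(3)]
    return features, interacts

data = [make_pair(i % 2 == 0) for i in range(200)]
train, test = data[:150], data[150:]

def knn_predict(x, k=5):
    """Label a pair by majority vote among its k nearest training pairs."""
    nearest = sorted(train, key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

accuracy = sum(knn_predict(x) == y for x, y in test) / len(test)
print(f"held-out accuracy: {accuracy:.2f}")
```

The real pipeline swaps this synthetic data for similarity features computed from ATC and SMILES codes, and compares KNN against the tree-based and naive Bayes models named above.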