How do causes lead to effects? Given an observed effect, can we identify the cause that produced it? Big Data opens the door to answering questions such as these, but before we can do so, we must dive into the field of Causal Inference, a field championed by Judea Pearl. In this series of blog posts we will learn the main ideas of Causality by working our way through "Causal Inference in Statistics: A Primer", co-authored by Pearl himself.
Amazon Affiliate Link: https://amzn.to/3gsFlkO
The book is divided into four chapters. The first chapter covers background material in probability and statistics. The other three chapters are (roughly) organized to match the three rungs of the Ladder of Causation as defined by Pearl:
- Association
- Intervention
- Counterfactuals
We will cover most of the book's content, with special emphasis on the parts that I believe are most interesting or relevant to practical applications. In addition to summarizing and explaining the content, we will also explore some of the ideas using simple (or as simple as possible) Python code you can run on Binder:
1.2 - Simpson's Paradox -- 1.2 - Simpson's Paradox.ipynb
1.3 - Probability Theory -- 1.3 - Probability and Statistics.ipynb
1.4 - Graphs -- 1.4 - Graphs.ipynb
1.5 - Structural Causal Models -- 1.5 - Structural Causal Models.ipynb
2.2 - Chains and Forks -- 2.2 - Chains and Forks.ipynb
2.3 - Colliders -- 2.3 - Colliders.ipynb
2.4 - d-separation -- 2.4 - d-separation.ipynb
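As a small taste of the kind of exploration the notebooks contain, here is a minimal sketch of Simpson's paradox (the topic of notebook 1.2), using the well-known kidney-stone treatment numbers; this example is my own illustration, not code taken from the notebooks:

```python
# Simpson's paradox: treatment A has a higher recovery rate than B within
# every subgroup, yet B looks better once the subgroups are pooled.
# Counts are the classic kidney-stone data (Charig et al., 1986).
recoveries = {
    # (treatment, stone_size): (recovered, total)
    ("A", "small"): (81, 87),
    ("B", "small"): (234, 270),
    ("A", "large"): (192, 263),
    ("B", "large"): (55, 80),
}

def rate(treatment, size=None):
    """Recovery rate for a treatment, optionally restricted to one subgroup."""
    pairs = [v for (t, s), v in recoveries.items()
             if t == treatment and (size is None or s == size)]
    recovered = sum(r for r, _ in pairs)
    total = sum(n for _, n in pairs)
    return recovered / total

for size in ("small", "large"):
    print(f"{size}: A={rate('A', size):.0%}  B={rate('B', size):.0%}")
print(f"overall: A={rate('A'):.0%}  B={rate('B'):.0%}")
```

Running this shows A winning in both subgroups while B wins overall, which is exactly the puzzle that causal reasoning (conditioning on stone size, a confounder) resolves.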
For a more in-depth treatment, check out Pearl's more technical book:
Amazon Affiliate Link: https://amzn.to/2OSBP6u