“Machine Learning Q and AI” Author Interview and Reading Notes

Sophia Yang, Ph.D.
17 min read · Jul 31

📚 Some people may know that I host a DS/ML book club where we read one book per month. This month, we read ‘Machine Learning Q and AI’ by the esteemed Sebastian Raschka. He’s known for his expertise in ML/AI and is one of the most technical authors I’ve come across.

The book covers 30 important ML/AI questions, offering a high-level understanding of complex technical concepts without delving into code implementations. I must admit that this book has become one of my all-time favorite ML/AI reads, and I consider it a must-read for anyone working in the ML/AI field.

In this blog post, I’d like to share the super insightful interview we had with the author and my reading notes for those who are interested in this book.

🔗 Book link: https://leanpub.com/machine-learning-q-and-ai

Author Interview

Our DS/ML book club had a wonderful time chatting with Sebastian about this book. Here are some highlights from our discussion:

  • Lottery Ticket Hypothesis: How a small, sparsely connected subnetwork within a larger neural network can achieve performance comparable to the full network.
  • Parallelism in Deep Learning: We discussed various types of parallelism in deep learning, such as data parallelism, model parallelism, tensor parallelism, pipeline parallelism, and sequence parallelism. Additionally, we learned how the Fabric library can help scale PyTorch models with ease.
  • Fine-tuning LLMs: Our discussion covered fine-tuning techniques for large language models, including popular methods like LoRA and QLoRA, as well as the practicality of Adapter methods (see the LoRA sketch after this list). We also compared reinforcement learning from human feedback (RLHF) to standard supervised fine-tuning.
  • Relevance of XGBoost: Sebastian shared his insights on whether XGBoost remains relevant in the era of deep learning, particularly as a robust baseline for tabular data.
  • Transformers: We explored attention mechanisms and Transformer models, and clarified the roles of the “encoder” and “decoder” components within the Transformer architecture.
  • Quantization Techniques: We discussed…
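To make the LoRA part of our fine-tuning discussion a bit more concrete, here is a minimal PyTorch sketch of the core idea: a frozen pretrained linear layer augmented with a trainable low-rank update. This is my own illustration rather than code from the book or the interview; the rank, alpha, and layer sizes are arbitrary choices.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen linear layer with a trainable low-rank update: W x + (alpha/r) * B A x."""
    def __init__(self, linear: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.linear = linear
        for p in self.linear.parameters():
            p.requires_grad = False  # freeze the pretrained weights
        # A is initialized with small random values, B with zeros,
        # so the LoRA update starts out as a no-op.
        self.lora_A = nn.Parameter(torch.randn(rank, linear.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(linear.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x):
        # original (frozen) output plus the scaled low-rank adaptation
        return self.linear(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

# Usage: swap a pretrained layer for its LoRA-wrapped version and train only the LoRA parameters.
layer = nn.Linear(768, 768)       # stand-in for a pretrained weight matrix
lora_layer = LoRALinear(layer, rank=8)
x = torch.randn(2, 768)
print(lora_layer(x).shape)        # torch.Size([2, 768])
```

In a real fine-tuning run, only the small lora_A and lora_B matrices receive gradients, which is what keeps LoRA’s memory and storage footprint so much smaller than full fine-tuning.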