Getting Started with Multimodal AI, One-Hot Encoding, and Other Beginner-Friendly Guides


Feeling inspired to write your first TDS post? We’re always open to contributions from new authors.

Taking the first step towards mastering a new topic is always a bit daunting—sometimes it’s even very daunting! It doesn’t matter if you’re learning about algorithms for the first time, dipping your toes into the exciting world of LLMs, or have just been tasked with revamping your team’s data stack: taking on a challenge with little or no prior experience requires nontrivial amounts of courage and grit.

The calm and nuanced perspective of more seasoned practitioners can go a long way, too — which is where our authors excel. This week, we’ve gathered some of our standout recent contributions that are tailored specifically to the needs of early-stage learners attempting to expand their skill set. Let’s roll up our sleeves and get started!

From Parallel Computing Principles to Programming for CPU and GPU Architectures
For freshly minted data scientists and ML engineers, few areas are more crucial to understand than memory fundamentals and parallel execution. Shreya Shukla’s thorough and accessible guide is the perfect resource to get a firm footing in this topic, focusing on how to write code for both CPU and GPU architectures to accomplish fundamental tasks like vector-matrix multiplication.

Multimodal Models — LLMs That Can See and Hear
If you’re feeling confident in your knowledge of LLM basics, why not take the next step and explore multimodal models, which can take in (and in some cases, generate) multiple forms of data—from images to code and audio? Shaw Talebi’s primer, the first part of a new series, offers a solid foundation from which to build your practical know-how.

Boosting Algorithms in Machine Learning, Part II: Gradient Boosting
Whether you’ve only recently started your ML journey or have been at it for so long that a refresher might be useful, it’s never a bad idea to firm up your knowledge of the basics. Gurjinder Kaur’s ongoing exploration of boosting algorithms is a great case in point, presenting accessible, easy-to-digest breakdowns of some of the most powerful models out there—in this case, gradient boosting.

NLP Illustrated, Part 1: Text Encoding
Another new project we’re thrilled to share with our readers? Shreya Rao’s just-launched series of illustrated guides to core concepts in natural language processing, the very technology powering many of the fancy chatbots and AI apps that have made a splash in recent years. Part one zooms in on an essential step in just about any NLP workflow: turning textual data into numerical inputs via text encoding.

Decoding One-Hot Encoding: A Beginner’s Guide to Categorical Data
If you’re looking to learn about another form of data transformation, don’t miss Vyacheslav Efimov’s clear and concise introduction to one-hot encoding, “one of the most fundamental techniques used for data preprocessing,” turning categorical features into numerical vectors.

Excel Spreadsheets Are Dead for Big Data. Companies Need More Python Instead.
One type of transition that is often even more difficult than learning a new topic is switching to a new tool or workflow—especially when the one you’re moving away from fits squarely within your comfort zone. As Ari Joury, PhD explains, however, sometimes a temporary sacrifice of speed and ease of use is worth it, as in the case of adopting Python-based data tools instead of Excel spreadsheets.
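To make the Excel-to-Python move a little more concrete, here’s a minimal sketch of one of the most common spreadsheet tasks—a pivot-table-style summary—done with pandas instead. The data and column names are invented for illustration; in practice you’d load a real file with pd.read_excel or pd.read_csv.

```python
import pandas as pd

# Hypothetical sales data -- in Excel this would live in a worksheet;
# here it's a DataFrame built inline for the sake of the example.
df = pd.DataFrame({
    "region": ["North", "North", "South", "South"],
    "quarter": ["Q1", "Q2", "Q1", "Q2"],
    "revenue": [120.0, 135.5, 98.0, 110.25],
})

# The rough equivalent of an Excel pivot table:
# total revenue per region, returned as a regular DataFrame.
summary = df.groupby("region", as_index=False)["revenue"].sum()
print(summary)
```

Unlike a spreadsheet, every step here is explicit and repeatable: rerunning the script on next month’s file produces the same summary with no manual clicking.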

Ready to venture out into other topics and challenges this week? We hope so—we’ve published some excellent articles recently on LLM apps, Python-generated art, AI ethics, and more:

After building LLM-based applications this past year, Satwiki De shares practical insights on how the process diverges from traditional product-development norms.

In his latest article, Robert Lange focuses on recent advances in neural-network training, and examines various methods of distributed training, such as data-parallel training and gossip-based averaging.

Translating data analysis into valuable business decisions remains a perennial challenge for data professionals. Tessa Xie presents a fresh perspective on this problem—as well as several pragmatic recommendations.

Anyone in the mood for a math deep dive should head right over to Reza Bagheri’s latest explainer, which walks us through the inner workings of the all-important softmax function.

Having been disappointed by the outputs of generative-AI tools, Anna Gordun Peiro attempts to create Mondrian-inspired artwork using nothing but Python, and documents her process in an easy-to-follow tutorial.

When you work with time series data, it’s essential to know whether your outlier treatment has been effective. Sara Nóbrega devotes her latest post to a detailed discussion of the various approaches you can use to evaluate the treatment’s impact.

What does it take to create AI ethics and governance frameworks that function at scale? Jason Tamara Widjaja unpacks the challenges of bridging common organizational and implementation gaps.

Writing at the intersection of music and AI, Jon Flynn walks us through some of the recent developments in this growing field, and zooms in on the Qwen2-Audio model, which is trained to transcribe musical inputs into sheet music.
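For readers who’d like a concrete anchor before diving into the softmax explainer mentioned above: softmax maps a vector of real-valued scores to a probability distribution. A minimal NumPy sketch (the max-subtraction step is the standard numerical-stability trick, valid because softmax is unchanged when you shift every input by the same constant):

```python
import numpy as np

def softmax(z):
    # Subtract the max before exponentiating: exp() of large logits
    # would overflow, and the shift leaves the result unchanged.
    shifted = z - np.max(z)
    exp_z = np.exp(shifted)
    return exp_z / exp_z.sum()

logits = np.array([2.0, 1.0, 0.1])
probs = softmax(logits)
# probs is non-negative and sums to 1, with the largest logit
# receiving the largest probability.
print(probs)
```

This is the same function that sits at the output of most classifiers, turning raw scores into something interpretable as class probabilities.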

Thank you for supporting the work of our authors! As we mentioned above, we love publishing articles from new authors, so if you’ve recently written an interesting project walkthrough, tutorial, or theoretical reflection on any of our core topics, don’t hesitate to share it with us.

Until the next Variable,

TDS Team

Getting Started with Multimodal AI, One-Hot Encoding, and Other Beginner-Friendly Guides was originally published in Towards Data Science on Medium.
