June 23, 2025
Day 20 - Learning Through Struggle
What I Learned
Today at my internship, I continued making progress on our core research project, which applies deep learning to ECG signal analysis for cardiovascular disease detection. The majority of my day was spent training our 1D CNN model and analyzing its performance. I ran multiple epochs and monitored key evaluation metrics such as AUC, precision, and recall.

Although the initial results showed moderate improvement, especially in training AUC, the validation metrics still indicated room for optimization. One challenge I faced was the precision and recall dropping to zero on the validation set during the first epoch, despite a non-zero AUC. This sparked a useful internal discussion on class imbalance and potential overfitting, and pushed me to consider implementing stratified sampling or class weighting in future runs.

In addition to model training, I spent time reviewing visualizations of the training curves, and I reflected on what steps we need to take to get the model’s validation AUC to a stronger level (ideally above 0.90). This included planning potential hyperparameter tuning, evaluating data preprocessing choices, and considering alternative architectures or ensemble methods.

Overall, today was a solid step toward our long-term goal of building a robust hybrid multimodal system. I left the lab feeling more confident in my grasp of model performance metrics and more motivated to push for stronger results in the next few training sessions.
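The zero precision and recall made more sense once I remembered that those two metrics depend on a decision threshold, while AUC only measures how well the model ranks positives above negatives. Here is a toy illustration in Python (hypothetical scores, not our model’s actual outputs) where the ranking is perfect yet nothing crosses the default 0.5 threshold:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, precision_score, recall_score

# Hypothetical validation scores: the two positives are ranked above
# every negative, but all predicted probabilities sit below 0.5.
y_true = np.array([0, 0, 0, 0, 0, 0, 1, 1])
y_prob = np.array([0.05, 0.08, 0.10, 0.12, 0.15, 0.20, 0.30, 0.35])

print(roc_auc_score(y_true, y_prob))  # 1.0, since the ranking is perfect

# At the default 0.5 threshold the model predicts "negative" everywhere,
# so there are no predicted positives and no true positives at all.
y_pred = (y_prob >= 0.5).astype(int)
print(precision_score(y_true, y_pred, zero_division=0))  # 0.0
print(recall_score(y_true, y_pred))                      # 0.0
```

If something similar is happening in our early epochs, the model is probably learning a useful ranking but keeping all predicted probabilities low, which is a classic symptom of heavy class imbalance.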
Blockers
No blockers today.
Reflection
Today’s work at my internship reminded me that progress isn’t always linear, especially in machine learning. I spent most of the day training our 1D CNN model for ECG signal classification. Initially, I was optimistic about the setup, but as the epochs ran, I quickly saw that while the training AUC was improving, the validation precision and recall stayed at zero for the early epochs. That felt a bit discouraging, but it also made me curious.

I started thinking more critically about why the model might be failing to generalize. Could it be class imbalance? Overfitting? Issues in the data split? I realized that even though the metrics weren’t ideal, this was an opportunity to learn. I jotted down some ideas for next steps: adjusting the class weights, trying different sampling strategies, and maybe even restructuring the model architecture. (A sketch of the first two ideas follows below.)

I also looked at some of the training graphs and compared them to ideal examples online. That helped me visualize what a well-performing model should look like. Seeing the gap between where we are and where we want to be helped me reframe today’s challenges as part of the journey, not setbacks.

All in all, today was more about learning through struggle than celebrating results. But I’m walking away with a better understanding of how to interpret model metrics, how to troubleshoot early-stage performance issues, and how to stay calm when things don’t work perfectly the first time.
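To make those next steps concrete, here is a minimal sketch of class weighting combined with a stratified split, assuming a Keras-style model and scikit-learn utilities. The arrays `X` and `y`, their shapes, and the 5% positive rate are synthetic stand-ins, not our actual data or pipeline:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.utils.class_weight import compute_class_weight

# Synthetic stand-in for windowed ECG signals: 1000 examples, ~5% positive.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 500, 1))        # (samples, timesteps, channels)
y = (rng.random(1000) < 0.05).astype(int)

# A stratified split preserves the ~5% positive rate in both folds,
# so the validation set is not accidentally starved of positives.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# "balanced" weights are inversely proportional to class frequency,
# so mistakes on the rare positive class cost more during training.
weights = compute_class_weight(
    class_weight="balanced", classes=np.unique(y_train), y=y_train
)
class_weight = {int(c): w for c, w in zip(np.unique(y_train), weights)}
print(class_weight)  # roughly {0: 0.53, 1: 10.0} at a 5% positive rate

# With a Keras model, these weights would plug straight into fit():
# model.fit(X_train, y_train, validation_data=(X_val, y_val),
#           epochs=20, class_weight=class_weight)
```

Stratifying the split also doubles as a cheap check on the “issues in the data split” hypothesis: if the old validation fold happened to contain very few positives, a recall of zero was almost guaranteed.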