June 16, 2025
Day 15 - Advancing the AI Models
What I Learned
Today, I continued to make significant strides in my summer internship at the Morgan State University AI Summer Research Institute, where I am working on my project, EcgNet. The day was filled with valuable hands-on work and insightful discussions with my professor, which deepened my understanding of the project and sharpened my technical skills.

The morning began with data preprocessing, where I applied wavelet denoising to clean the raw ECG signals. This step is essential for making the data suitable for training machine learning models. I ran into challenges with varying noise levels across recordings, but I worked through them using wavelet transforms and my prior experience with OpenCV. Cleaning the signals was a crucial step toward improving the quality of the dataset for further analysis.

After the preprocessing, I spent time refining our 1D-CNN and 2D-CNN models, which are central to detecting cardiovascular abnormalities in the ECG signals. I adjusted several hyperparameters and experimented with different architectures. The improvements were incremental, but they were a step in the right direction, and I'm optimistic about refining these models further over the next few days.
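To give a flavor of the denoising step, here is a minimal sketch of wavelet denoising for a 1-D signal using the PyWavelets (`pywt`) package. The wavelet choice (`db4`), decomposition level, and the Donoho-Johnstone universal threshold are illustrative assumptions, not the exact settings used in the project:

```python
import numpy as np
import pywt


def wavelet_denoise(signal, wavelet="db4", level=4):
    """Denoise a 1-D signal by soft-thresholding its wavelet detail coefficients."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Robust noise estimate from the finest detail coefficients (MAD estimator).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    # Universal threshold (Donoho-Johnstone).
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))
    # Keep the approximation coefficients; soft-threshold every detail level.
    denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(signal)]


# Demo on a synthetic noisy waveform (a real pipeline would load ECG recordings).
np.random.seed(0)
t = np.linspace(0, 1, 1000)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.3 * np.random.randn(len(t))
smoothed = wavelet_denoise(noisy)
```

In practice the threshold can be tuned per recording, which is one way to cope with the varying noise levels mentioned above.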
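For the model side, a 1D-CNN along these lines is one plausible shape for the architecture being refined. This is a small illustrative sketch in PyTorch (the framework, layer sizes, input length of 250 samples, and 5 output classes are all my assumptions, not the project's actual configuration):

```python
import torch
import torch.nn as nn


class EcgNet1D(nn.Module):
    """Minimal 1-D CNN for beat-level ECG classification (illustrative only)."""

    def __init__(self, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis to one value per channel
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):  # x: (batch, 1, signal_length)
        z = self.features(x).squeeze(-1)  # -> (batch, 64)
        return self.classifier(z)


model = EcgNet1D()
logits = model(torch.randn(8, 1, 250))  # batch of 8 single-lead segments
```

Hyperparameters like kernel sizes, channel widths, and depth are exactly the knobs being adjusted during the incremental tuning described above.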
Blockers
No Blockers
Reflection
Today was another valuable day of my internship at the Morgan State University AI Summer Research Institute, working on the EcgNet project. It was a mix of technical progress and insightful discussion, particularly after meeting with my professor, which gave me a chance to reflect on my work and refine my approach.

The day started with data preprocessing, applying wavelet denoising to clean the raw ECG signals. At first I struggled to get consistent results across different noise levels, but as I worked through the wavelet transforms I developed a better sense of how to handle these issues efficiently. I'm beginning to realize just how much this stage determines the quality of everything that follows.

I then moved on to refining the 1D-CNN and 2D-CNN models we're using to detect abnormalities in the ECG signals. Adjusting hyperparameters and architectures was tricky, but I saw small gains in performance, a reminder that deep learning models often require patience and persistence. The progress felt incremental, but breakthroughs rarely happen overnight.

Later in the day, I met with my professor, and it turned out to be one of the most enlightening parts of the day. We went over the challenges I had encountered, particularly the noise in the ECG data, and my professor suggested ways to better handle that variability. We also discussed SHAP (SHapley Additive exPlanations) for model interpretability. That conversation helped me see the bigger picture: it's not enough for a model to be accurate; it also needs to be interpretable for real-world use, especially in the medical field. I appreciated the way my professor framed the importance of explainability, as it's not something I had fully grasped before.