July 21, 2025

Day 40 - Advancing Explainable AI and Preparing for Public Presentation

By Ayomide Jeje

What I Learned

Today marked a meaningful shift in the direction of my internship as I officially began working on the Explainable AI (XAI) component of our ECG diagnostic project. The focus of this new phase is to help users, especially doctors and medical researchers, understand why the deep learning models make certain decisions rather than just accepting the predictions at face value. This step is critical for building trust in AI-assisted healthcare.

I started by reading through research papers and technical guides on SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations), and I reviewed examples of how these tools have been used in medical applications. The aim is to apply them to our trained models, particularly the 1D CNN and hybrid models, and visualize which parts of the ECG signal contribute most to specific diagnoses like Myocardial Infarction or Conduction Blocks (a rough sketch of what this step might look like in code follows at the end of this section).

In parallel, I also contributed to the commercial and outreach preparation for our final project showcase. This included reviewing the current version of our promotional script, offering ideas on how to frame our AI solution for a general audience, and discussing how to present our ECGNet system visually in a compelling, trustworthy way. I started outlining how we might narrate our technical breakthroughs, including preprocessing steps like wavelet denoising and our fusion of multimodal features, in terms that non-experts can understand.
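On the technical side, here is a minimal sketch of what the SHAP step I'm planning might look like. It is purely exploratory: the tiny `nn.Sequential` model is a randomly initialized stand-in for our actual trained 1D CNN (which would be loaded from a checkpoint), and the shapes and class count are placeholders I chose for illustration, not values from our codebase.

```python
import torch
import torch.nn as nn
import shap

signal_length = 1000  # assumed ECG window length, in samples

# Stand-in for our trained 1D CNN: in practice this would be loaded
# from a checkpoint; this dummy only fixes the expected shapes.
model = nn.Sequential(
    nn.Conv1d(1, 8, kernel_size=7, padding=3),
    nn.ReLU(),
    nn.MaxPool1d(4),
    nn.Flatten(),
    nn.Linear(8 * (signal_length // 4), 5),  # e.g. 5 diagnostic classes
)

# Random tensors standing in for preprocessed ECG windows,
# shaped (batch, channels, signal_length).
background = torch.randn(100, 1, signal_length)  # reference distribution
to_explain = torch.randn(5, 1, signal_length)    # beats we want to explain

explainer = shap.DeepExplainer(model, background)
shap_values = explainer.shap_values(to_explain)  # one array per class

# Each shap_values[c] matches the input shape, so every time step in
# the ECG gets an importance score for class c (e.g. Myocardial
# Infarction), which can then be overlaid on the raw waveform.
```

The appeal of this approach for our project is that the attribution has the same shape as the signal itself, so the explanation can be drawn directly on the waveform a cardiologist already knows how to read.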

Blockers

No blockers.

Reflection

Today was a shift in both focus and mindset. I officially started working on Explainable AI (XAI) for our ECG diagnostic project, something I've been looking forward to. Up until now, the bulk of my work has been around data preprocessing, model training, and performance tuning. But as we move toward real-world application, it's clear that high accuracy isn't enough: people need to trust the model, and that means it must be explainable.

My focus was on understanding and preparing to implement tools like SHAP and LIME. I spent a good amount of time reading relevant papers and case studies, especially ones applied to medical imaging and time-series data. I started brainstorming how to map model decisions back to specific segments of the ECG signal, essentially asking: what part of this heartbeat made the AI think it was abnormal? That question is now central to the next phase of this project, and the sketch after this section shows one way we might eventually visualize an answer.

In a completely different, yet surprisingly connected way, I also began contributing to the commercial side of our work. We're preparing for a public presentation, and I helped review our early draft for the commercial video. I suggested ways to make the AI system feel both advanced and human: not just technical, but empathetic and trustworthy. I'm realizing that communicating AI to the public is just as much an art as it is a science.
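To make the "which part of this heartbeat" question concrete, here is one illustrative way the answer might be rendered. The helper below is my own placeholder, not project code: it assumes we already have a per-time-step attribution array (from SHAP or LIME) aligned with the signal, and a sampling rate of 500 Hz that I picked for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_ecg_attribution(signal, attribution, fs=500):
    """Overlay per-time-step attribution scores on an ECG trace.

    `signal` and `attribution` are 1-D arrays of equal length;
    `fs` is the assumed sampling rate in Hz.
    """
    t = np.arange(len(signal)) / fs
    fig, ax = plt.subplots(figsize=(10, 3))
    ax.plot(t, signal, color="black", linewidth=0.8)
    # Redder points mark the time steps the model leaned on most,
    # e.g. an elevated ST segment behind a Myocardial Infarction call.
    ax.scatter(t, signal, c=np.abs(attribution), cmap="Reds", s=8)
    ax.set_xlabel("Time (s)")
    ax.set_ylabel("Amplitude (mV)")
    ax.set_title("Which part of this heartbeat drove the prediction?")
    plt.tight_layout()
    plt.show()
```

A plot like this is also exactly the kind of visual I'd want in our showcase material: it lets a non-expert see the model "pointing at" the suspicious part of a heartbeat.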
