The Greatest Turning Point For MSP
From Shambles to Structure: The Rebirth of MSP
When we looked at the state of MSP, it was no secret that things were in shambles. The potential was there, but the execution was drifting. We realized that if we wanted an organization that actually mattered, one that produced engineers rather than just event organizers, we had to burn the old manual and write a new one.
I didn't step into this alone. I came in with my friends, people I trusted not just to work, but to lead. We decided to rebuild the foundation by forming a board that was actually competent, replacing vague responsibilities with a rigid, professional hierarchy.
Here is the story of how MSP, and especially Resonance, was reimagined, the team that was built to sustain it, and the ambitious roadmap currently in motion.

The Awakening of Resonance
I didn't want to build just another student club; I wanted to create a research-grade environment. My vision for Resonance was specific: an AI committee strictly focused on sound and frequency, exploring the intersection of artificial intelligence and acoustics. The goal was to transform curious learners into skilled audio AI practitioners who could understand how sound works in the real world.
Forming the Competent Board
To achieve this, I couldn't operate alone. I needed to rebuild the structure with a team of friends who were not just capable, but professional. We established a rigid, competent hierarchy to ensure the vision didn't fall apart, and we agreed on professional working protocols immediately: strict version control via GitHub, a mandatory Unix/Linux development environment to avoid Windows-specific debugging, and a clear chain of command to keep decisions moving fast.
The Blueprint: What We Are Building Now
With the board in place, we launched a 5-month operational plan divided into two aggressive phases:
Phase 1: The Foundation (Audio Engineering)
We are currently stripping everything back to basics. We aren't jumping straight into complex models; we are teaching our members the physics of sound, spectrograms, and the Short-Time Fourier Transform. The goal here is tangible: within the first two months, the team will build a functional Shazam-like audio fingerprinting system from scratch, mastering the algorithms that power real-world audio recognition.
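To make that goal concrete, here is a minimal sketch of the landmark-hashing idea behind Shazam-style fingerprinting: compute the STFT, keep only prominent spectrogram peaks, and hash pairs of peaks into noise-robust fingerprints. The `fingerprint` function, window sizes, and thresholds below are illustrative assumptions for this post, not our committee's final implementation.

```python
import numpy as np
from scipy import signal
from scipy.ndimage import maximum_filter

def fingerprint(audio: np.ndarray, sr: int = 22050) -> set[tuple]:
    """Reduce a mono audio signal to a set of landmark hashes."""
    # Short-Time Fourier Transform: slice the signal into overlapping
    # windows and take the FFT of each, giving a time-frequency grid.
    freqs, times, stft = signal.stft(audio, fs=sr, nperseg=2048, noverlap=1024)
    spectrogram = np.abs(stft)

    # Constellation map: keep only local maxima that stand out
    # above the surrounding time-frequency neighborhood.
    local_max = maximum_filter(spectrogram, size=20) == spectrogram
    threshold = spectrogram.mean() + 2 * spectrogram.std()
    peak_f, peak_t = np.where(local_max & (spectrogram > threshold))

    # Pair each anchor peak with a few peaks in a forward "target zone"
    # and hash (f1, f2, dt); the pair survives noise and compression.
    peaks = sorted(zip(peak_t, peak_f))
    hashes = set()
    for i, (t1, f1) in enumerate(peaks):
        for t2, f2 in peaks[i + 1 : i + 6]:   # fan-out of 5 per anchor
            dt = t2 - t1
            if 0 < dt <= 100:                 # target-zone width in frames
                hashes.add((int(f1), int(f2), int(dt)))
    return hashes
```

Matching a query against a database then reduces to counting hash collisions whose time offsets agree, which is what keeps recognition fast even across huge song libraries.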
Phase 2: The Deep Dive (Piano Transcription)
Once the foundation is solid, we transition to deep learning. We are moving the team from standard machine learning into PyTorch and Neural Networks. The capstone for this season is massive: a Piano Transcription Model. We are building a system that takes raw audio, transcribes it into MIDI, and feeds it into a digital piano simulator that plays the arrangement in real-time.
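As a rough illustration of where Phase 2 is headed, the sketch below shows a small frame-level transcription baseline in PyTorch: a CNN plus a bidirectional GRU that maps a mel spectrogram to per-frame probabilities over the 88 piano keys. The `FrameTranscriber` name, layer sizes, and 229-bin mel front end are assumptions for the example; the actual architecture will be decided during the phase itself.

```python
import torch
import torch.nn as nn

class FrameTranscriber(nn.Module):
    """Frame-level transcription: mel spectrogram in, 88 piano keys out.

    A deliberately small baseline: a CNN summarizes local
    time-frequency patterns, a GRU tracks context over time, and a
    sigmoid head predicts which keys are sounding in each frame.
    """

    def __init__(self, n_mels: int = 229, hidden: int = 256, n_keys: int = 88):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((1, 2)),                # pool frequency, keep time
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((1, 2)),
        )
        self.rnn = nn.GRU(32 * (n_mels // 4), hidden,
                          batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_keys)

    def forward(self, mel: torch.Tensor) -> torch.Tensor:
        # mel: (batch, time, n_mels) -> add a channel dim for Conv2d
        x = self.conv(mel.unsqueeze(1))          # (B, 32, T, n_mels // 4)
        x = x.permute(0, 2, 1, 3).flatten(2)     # (B, T, 32 * (n_mels // 4))
        x, _ = self.rnn(x)                       # (B, T, 2 * hidden)
        return torch.sigmoid(self.head(x))       # per-frame key probabilities

# Training uses binary cross-entropy against a piano-roll target:
model = FrameTranscriber()
mel = torch.randn(4, 100, 229)                   # 4 clips, 100 frames each
probs = model(mel)                               # (4, 100, 88)
target = torch.rand(4, 100, 88).round()          # dummy 0/1 piano roll
loss = nn.functional.binary_cross_entropy(probs, target)
```

Thresholding the per-frame probabilities yields a piano roll, which can then be converted into MIDI events and streamed to the playback simulator.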
The Recruitment Strategy
We realized we needed a two-tier approach to talent. First, we opened "Open Enrollment" for anyone with passion and basic computer literacy—we wanted to find hidden gems. But to ensure we hit our technical milestones, we planned a "Second Recruitment" specifically for advanced members with existing Python and ML skills to join us during the heavy lifting of the Deep Learning phase.
The Future: Beyond the Horizon
We are not just looking at the next five months. I have already laid out the path for the next few years. We are fostering a culture of experimentation today so we can tackle massive challenges tomorrow, including:
- Year 2: Building an Egyptian Arabic Text-to-Speech model to solve the lack of dialect support.
- Audio Engineering: Creating adaptive spatial audio systems similar to Dolby Atmos.
- The Ultimate Goal (2027): Bridging audio AI with Radio Frequency engineering to build a transceiver capable of contacting the International Space Station during the NASA Space Apps Challenge.
We are building the foundation for audio AI at FCIS. It demands late nights and perseverance, but together, we are turning sound into intelligence.