Life in the Library and Lab

Science is often described as a journey of discovery, but for many of us it is also a journey of resilience, ambition, and purpose. My path through the world of research has been marked not only by equations, datasets, and algorithms but also by the pursuit of meaning: how knowledge created in the quiet corners of a library or the intensive hours of a lab can ripple outward to shape society. From the beginning of my academic life, I found myself drawn to spaces where ideas lived and breathed: the library, where centuries of thought could be accessed with the turn of a page, and the lab, where curiosity could be translated into experimentation and proof. These two spaces, one rooted in reflection and the other in action, have shaped the essence of my scientific and academic journey.

The Library: The Beginning of Ideas

Libraries taught me the discipline of sitting with complex questions. As a student navigating courses in computer science, pattern recognition, and machine learning, I spent hours with foundational texts, academic papers, and research manuals. The library was not just a place to collect references for exams; it was a place of quiet combat. It was there that I wrestled with questions like: How can deep learning systems go beyond accuracy and become trustworthy? How do we make algorithms explainable to humans while maintaining their predictive power? And how can data-driven systems address not just computational problems but real human needs, like disease diagnosis or online safety? The answers were never simple, but the process of struggling with them gave me clarity of purpose. The library instilled patience and nurtured my ambition to move beyond consuming knowledge to creating it.

The Lab: The Ground of Action

If the library is where I dream, the lab is where I fight. Every research project I have undertaken, whether toxic comment detection with extended LSTMs, real-time anomaly detection for cyber threats, or PCOS diagnosis with extreme gradient boosting, was born out of a problem I saw in the world that demanded more than theory. The lab is not always glamorous. It is long nights debugging code that refuses to converge, data pipelines collapsing under class imbalance, and models overfitting despite careful design. It is failing repeatedly, only to come back the next day with a renewed plan. But the lab is also where breakthroughs happen. It is where an experiment runs smoothly after weeks of adjustment, or where a new model outperforms a state-of-the-art baseline by a few percentage points: a small but meaningful step in the larger fight toward innovation. One of my proudest lab contributions was an extended LSTM (xLSTM) integrated within a BERT-based deep learning framework, developed to enhance minority-class detection and improve interpretability. I designed a lightweight bidirectional LSTM architecture with attention mechanisms and cosine similarity filtering. Unlike traditional models that faltered on minority classes, the xLSTM significantly improved fairness in classification by amplifying underrepresented signals. To me, this was not just a technical achievement; it was a statement about the kind of AI I want to build: systems that do not silence the marginalized but amplify fairness.
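Two of the building blocks mentioned above, attention over recurrent hidden states and cosine similarity filtering, can be sketched in a few lines. The snippet below is a minimal illustration, not the project's actual implementation: it assumes hypothetical BiLSTM hidden states are already available as an array, and the function names, similarity threshold, and dimensions are invented for the example.

```python
import numpy as np

def attention_pool(hidden_states, query):
    # hidden_states: (T, d) array of BiLSTM outputs over T time steps;
    # query: (d,) learned attention vector (here just a given array).
    scores = hidden_states @ query            # one score per time step
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                  # softmax over time steps
    return weights @ hidden_states            # (d,) attention-pooled summary

def cosine_filter(embeddings, threshold=0.95):
    # Greedily keep embeddings whose cosine similarity to every
    # already-kept embedding stays below the threshold, dropping
    # near-duplicates so rare signals are not drowned out.
    kept = []
    for v in embeddings:
        v = v / (np.linalg.norm(v) + 1e-12)   # unit-normalize
        if all(float(v @ k) < threshold for k in kept):
            kept.append(v)
    return np.array(kept)
```

In a full pipeline, the pooled vector would feed a classifier head, and the filter would run over sentence embeddings before training; both pieces here are deliberately framework-free so the mechanics are visible.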

Achievements: Milestones in the Journey

Achievements are never the final destination; they are milestones that mark progress and resilience. Being honored with the Best Student Project Award (First Rank, ITU, 2024), achieving Class Rank 1 (2025), and securing competitive fellowships such as the Mercedes-Benz Fellowship (2025) and the NSF Grassroots Fellowship (2025) reaffirmed that persistence yields results. Leading award-winning projects at ITU further strengthened this conviction, proving that challenges fuel growth. Competitions also played a pivotal role in shaping my ambition. At the English Language Olympiad (2023), I ranked 1st nationally and 5th in Southeast Asia, while at the World Science & Mathematics Olympiad (TÜBİTAK, 2023), I placed among the top scorers globally. These were more than victories; they became training grounds where I cultivated resilience, sharpened problem-solving skills, and built the intellectual stamina demanded by serious research. Yet the most meaningful achievements extended beyond personal recognition. As a Microsoft Student Ambassador, I organized campus technical events and workshops that empowered peers to explore innovation hands-on. As the founder of Cyforb AI, I created a platform dedicated to advancing ethical and impactful AI research. These initiatives reminded me that science is never a solitary pursuit; it thrives when knowledge is shared, tested, and transformed into solutions that benefit society.

Fighting for Societal Impact

The question that drives me is simple but profound: How can science serve society better?

Geoffrey Hinton, often called the Godfather of AI, once stated that artificial intelligence carries both immense potential and significant risks. This balance between promise and responsibility has shaped my research philosophy. I have consistently pursued not only technical robustness but also trustworthiness, fairness, and interpretability in the systems I design. Whether in healthcare, digital governance, or online communities, my work seeks to address both computational and ethical questions. Take, for example, my research on PCOS diagnosis. Polycystic ovary syndrome is a hormonal disorder affecting millions of women worldwide, yet it is often underdiagnosed or misdiagnosed. By optimizing feature selection and leveraging ensemble learning strategies, my project demonstrated how machine learning can enable earlier and more accurate diagnosis, particularly in data-scarce environments where traditional methods fall short. Similarly, my work on real-time anomaly-based intrusion detection in wireless networks reflects a parallel concern: as cyber threats increase, building intelligent systems capable of instantly identifying malicious behavior is not just a technical challenge but a societal necessity. To me, every dataset and every algorithm carries a human story: someone who might benefit from accurate detection, timely intervention, or fairer treatment. This belief keeps me grounded. It reminds me that my work in the lab is never just about models; it is about the lives and communities those models are built to serve.
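The feature-selection-plus-ensemble recipe described above can be sketched with scikit-learn. This is an illustrative sketch on synthetic data, not the study's pipeline: it assumes mutual-information feature ranking and uses scikit-learn's GradientBoostingClassifier as a stand-in for extreme gradient boosting, and the dataset shape, class weights, and k=10 are invented for the example.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

# Synthetic stand-in for an imbalanced clinical tabular dataset.
X, y = make_classification(n_samples=600, n_features=30, n_informative=8,
                           weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

pipe = Pipeline([
    # Rank features by mutual information with the label; keep the top 10.
    ("select", SelectKBest(mutual_info_classif, k=10)),
    # Boosted tree ensemble trained on the reduced feature set.
    ("model", GradientBoostingClassifier(random_state=0)),
])
pipe.fit(X_tr, y_tr)
accuracy = pipe.score(X_te, y_te)
```

Wrapping selection and classification in one Pipeline keeps the feature-ranking step inside cross-validation, which avoids leaking test information into the selection, a common pitfall in small clinical datasets.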

Ambition: Toward Next-Generation AI

Looking ahead, my ambition is to extend my research at the Ph.D. level by integrating representation learning, causal inference, and fairness-aware modeling. I envision building AI systems that are transparent, accountable, and socially responsible while maintaining state-of-the-art performance. My long-term vision is to contribute to next-generation AI architectures that don’t just maximize accuracy but embody values of reliability, fairness, and interpretability. I see healthcare systems that explain diagnoses to both doctors and patients in understandable terms; I see cybersecurity systems that can detect anomalies while adapting to evolving threats; and I see governance systems where algorithms can be audited for fairness and bias. This ambition is not born of abstract theory but of the countless hours in the library and lab where I learned, failed, and created. It is the continuation of a journey that began with curiosity and now pushes forward with responsibility.

Life Between Pages and Experiments

The rhythm of my life continues to oscillate between the library and the lab. In the library, I cultivate ideas, reading across disciplines to broaden my perspective. In the lab, I test those ideas, turning hypotheses into prototypes and prototypes into impactful contributions. Together, these two spaces form the crucible in which my scientific identity is forged. Science, to me, is not just a career. It is a fight, a fight against ignorance, a fight for fairness, and above all, a fight for impact. Every experiment is an act of hope, every paper a step forward, and every recognition a reminder that we can change the world through persistence and knowledge. I stand as a researcher shaped by both ambition and humility. My story is one of endless nights in front of datasets, days buried in research papers, and years of striving toward excellence. But more than that, it is the story of someone who believes that the library and the lab are not separate worlds; they are the twin pillars on which impactful science is built. And so, I continue my journey. With ambition as my compass, resilience as my companion, and societal impact as my destination, I remain committed to the lifelong pursuit of knowledge and its power to transform the world.

NoorAI Lab

About the Lab: NoorAI Lab is a research-driven initiative focused on advancing trustworthy, explainable, and fairness-aware AI. The lab bridges theory and practice by developing scalable AI systems for real-time deep learning and natural language processing (NLP). By combining cutting-edge research with practical applications, NoorAI Lab aims to create AI architectures that are robust, interpretable, and socially responsible, addressing critical challenges in healthcare, security, smart cities, and digital governance.

Vision: NoorAI Lab envisions becoming a global hub for ethical and reliable AI research, pioneering solutions that empower society while maintaining transparency, fairness, and accountability. The lab seeks to shape the next generation of AI systems capable of impactful, real-world deployment across diverse domains.

Mission: The mission of NoorAI Lab is to advance trustworthy AI through the design of models that are accurate, transparent, and fairness-aware. The lab focuses on translating research in deep learning, NLP, and representation learning into robust, deployable systems. All projects integrate ethical considerations, causal inference, and fairness-aware evaluation to ensure responsible AI deployment. NoorAI Lab also emphasizes collaboration with students, researchers, industry, and policymakers, driving innovation and tackling high-impact challenges.

Research Themes: NoorAI Lab focuses on five core research areas:

  • Trustworthy NLP: Toxic comment detection, neural machine translation, bias mitigation, and explainable AI.
  • Real-Time & High-Dimensional Deep Learning: xLSTM, CNN, DNN, RNN, Transformers, ensemble methods, and anomaly detection for safety-critical systems.
  • Algorithmic Fairness & Responsible AI: Fairness-aware modeling, causal inference, interpretability, and ethical deployment strategies.
  • Quantum-Inspired & Advanced Computational Models: Hybrid quantum-classical algorithms, representation learning, and optimization for next-generation AI architectures.
  • AI for Societal Transformation: Predictive analytics, spatiotemporal modeling, and AI-driven urban and economic planning for smart cities.

Current Lab Projects: NoorAI Lab translates its vision into action through a portfolio of applied research projects. These include toxic comment classification using xLSTM and high-dimensional feature extraction for scalable online safety, robust and federated deep learning models that are privacy-preserving and adversarially resilient, and dual-priority scheduling in real-time systems to enhance performance in safety-critical environments. The lab also explores quantum computing feasibility studies for hybrid algorithms in cryptography and optimization, as well as AI-driven Forward Vision Dhaka, which applies predictive analytics and optimization to sustainable urban planning.

Core Values: NoorAI Lab is guided by four fundamental values:

  • Integrity: Upholding ethical and fairness standards in AI research.
  • Innovation: Developing cutting-edge methodologies in AI, NLP, and computational systems.
  • Collaboration: Engaging with academia, industry, and policymakers to maximize research impact.
  • Impact: Ensuring that all research delivers tangible societal, industrial, and scientific benefits.

Lab Slogan: “Building Trustworthy, Explainable, and Responsible AI for Real-World Impact.”