Zürich, Switzerland | MSc in AI @ University of Zürich | Medical AI Research Intern @ YUUNIQ Health
ML researcher and engineer specializing in AI safety, LLM evaluation, and multi-agent systems, currently working on AI safety research and medical AI applications. Published researcher with experience in both academic and industry settings, focused on understanding and improving the behavior of large language models in complex scenarios.
Are you looking for a versatile AI researcher who can:
- Conduct rigorous AI safety research and LLM evaluation studies?
- Design and implement multi-agent systems and simulations?
- Develop RAG systems and vector search solutions for complex data?
- Bridge the gap between research breakthroughs and production systems?
- Deliver measurable improvements in AI system performance and safety?
With experience spanning AI safety research, multi-agent systems, adversarial robustness, and medical AI applications, I bring a unique blend of theoretical depth and practical implementation skills.
- AI Safety & Evaluation: Multi-agent simulations to understand LLM behavior in social scenarios
- LLM Behavioral Research: Investigating reasoning, moral decision-making, and political biases in language models
- Healthcare RAG: Optimizing RAG pipelines for healthcare data (blood biomarkers, wearables)
- Multi-Agent Systems: LLM agents in complex decision-making scenarios
- Adversarial Robustness: Robustness of NLP systems to adversarial examples
- LLM Infrastructure: Evaluation and monitoring for language models
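To make the RAG focus concrete, here is a minimal retrieval sketch. It uses a toy bag-of-words "embedding" and cosine similarity in place of a real embedding model and vector database, and the health snippets are invented for illustration; a production pipeline would swap in a real embedder and vector store.

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding' standing in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, docs, k=2):
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

# Hypothetical health-data snippets standing in for an indexed corpus.
docs = [
    "ferritin reflects iron stores in blood biomarker panels",
    "resting heart rate from wearables tracks cardiovascular fitness",
    "vitamin D levels vary with season and sun exposure",
]
context = retrieve("iron levels in blood test", docs, k=1)
# The retrieved context is then prepended to the LLM prompt.
prompt = "Answer using only this context:\n" + "\n".join(context)
```

The same retrieve-then-prompt shape underlies pipelines over both structured and unstructured health data; only the embedder and store change.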
Medical AI Research Intern @ YUUNIQ Health (June 2025 - Present)
- RAG optimization for structured/unstructured health data
- GenAI techniques for personalized medicine
- MongoDB database management for large-scale health data
NLP Engineer @ Frojigo AI Startup (Oct 2024 - Feb 2025)
- Automated sustainability report generation using LLMs
- RAG systems with privacy-preserving evaluation methods
Research Assistant @ University of Zรผrich (Mar - Aug 2024)
- Fine-tuned large-scale LLMs on Cerebras chips
- Achieved 68% improvement in domain-specific performance
Languages & Frameworks: Python, R, SQL, LaTeX, PyTorch, TensorFlow, Hugging Face Transformers, LangChain/LangGraph
Specialized Areas: AI Safety Research, LLM Evaluation, Multi-Agent Systems, RAG Systems, Vector Search, Prompt Engineering, NLP Adversarial Attacks, Fine-tuning, Embeddings, AI Behavioral Analysis
Databases & Tools: PostgreSQL, MongoDB, Neo4j, Git, Shell scripting
[COLM 2024] "Corrupted by Reasoning: Reasoning Language Models Become Free-Riders in Public Goods Games" - Conference on Language Modeling
- Multi-agent simulation study revealing how reasoning can lead to free-riding behavior in LLMs
[CLEF 2024] "TextTrojaners at CheckThat! 2024: Robustness of Credibility Assessment with Adversarial Examples through BeamAttack" - CLEF Working Notes
- Novel adversarial attack algorithm achieving 2.8x-3.3x improvement over traditional methods
"Democratic or Authoritarian? Probing a New Dimension of Political Biases in Large Language Models" (arXiv)
- Novel methodology for assessing LLM alignment with democracy-authoritarianism spectrum
"Are Language Models Consequentialist or Deontological Moral Reasoners?" (arXiv)
- Taxonomy of moral rationales for systematically classifying LLM reasoning traces
"When Ethics and Payoffs Diverge: LLM Agents in Morally Charged Social Dilemmas" (arXiv)
- Introduced MoralSim to evaluate LLM behavior in morally charged prisoner's dilemma and public goods games
"Robustness of Misinformation Classification Systems to Adversarial Examples Through BeamAttack" (arXiv)
- Advanced adversarial attack methodology using beam search for enhanced text generation
"LSTM-based Time Series Forecasting for Air Quality" - Bachelor's Thesis, University of Bergamo
- Applied LSTM networks for air-quality forecasting with LIME-based interpretability analysis
- 5.7/6.0 GPA in MSc Computer Science (AI track)
- Full Academic Scholarship at University of Bergamo (merit-based)
- Conference Publications in top-tier AI venues (COLM, ACL)
- Top 1% Faculty Ranking during undergraduate studies
- 5+ Research Papers under review in leading AI conferences and journals
Open to opportunities in AI safety research, LLM evaluation, and multi-agent systems! If you're seeking a results-driven AI researcher with expertise in understanding and improving LLM behavior, let's connect!

