David Guzman Piedrahita

๐ŸŒ Zรผrich, Switzerland | ๐ŸŽ“ MSc in AI @ University of Zรผrich | ๐Ÿฅ Medical AI Research Intern @ YUUNIQ Health

About Me

ML researcher and engineer with expertise in AI safety, LLM evaluation, and multi-agent systems, currently working on AI safety research and medical AI applications. Published researcher with a track record in both academic and industry settings, focused on understanding and improving the behavior of large language models in complex scenarios.

🎯 Are you looking for a versatile AI researcher who can:

  • Conduct rigorous AI safety research and LLM evaluation studies?
  • Design and implement multi-agent systems and simulations?
  • Develop RAG systems and vector search solutions for complex data?
  • Bridge the gap between research breakthroughs and production systems?
  • Deliver measurable improvements in AI system performance and safety?

With experience spanning AI safety research, multi-agent systems, adversarial robustness, and medical AI applications, I bring a unique blend of theoretical depth and practical implementation skills.

🚀 Current Focus

  • AI Safety & Evaluation: Multi-agent simulations to understand LLM behavior in social scenarios
  • LLM Behavioral Research: Investigating reasoning, moral decision-making, and political biases in language models
  • Medical RAG: Optimizing RAG pipelines for healthcare data (blood biomarkers, wearables)
  • Multi-Agent Systems: LLM agents in complex decision-making scenarios
  • Adversarial Robustness: Attacks and defenses for NLP systems
  • Evaluation Infrastructure: LLM evaluation and monitoring tooling
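As a toy illustration of the retrieval step behind the RAG work listed above (a minimal sketch only, not the production pipeline; the documents, queries, and bag-of-words scoring are made up for the example):

```python
# Rank documents by cosine similarity of bag-of-words vectors and
# return the best match, which a RAG pipeline would pass to the LLM
# as grounding context. Real systems use learned embeddings instead.
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str]) -> str:
    """Return the document most similar to the query."""
    q = Counter(query.lower().split())
    return max(docs, key=lambda d: cosine(q, Counter(d.lower().split())))

docs = [
    "ferritin and hemoglobin levels from the latest blood panel",
    "sleep and heart-rate data exported from a wearable device",
]
best = retrieve("blood biomarker ferritin result", docs)
```

Swapping the term-count vectors for dense embeddings and the `max` for an approximate nearest-neighbor index is what turns this sketch into a practical vector-search backend.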

💼 Recent Experience

  • Medical AI Research Intern @ YUUNIQ Health (June 2025 - Present)

    • RAG optimization for structured/unstructured health data
    • GenAI techniques for personalized medicine
    • MongoDB database management for large-scale health data
  • NLP Engineer @ Frojigo AI Startup (Oct 2024 - Feb 2025)

    • Automated sustainability report generation using LLMs
    • RAG systems with privacy-preserving evaluation methods
  • Research Assistant @ University of Zürich (Mar - Aug 2024)

    • Fine-tuned large-scale LLMs on Cerebras chips
    • Achieved 68% improvement in domain-specific performance

🛠 Tech Stack

Languages & Frameworks: Python, R, SQL, LaTeX, PyTorch, TensorFlow, Hugging Face Transformers, LangChain/LangGraph

Specialized Areas: AI Safety Research, LLM Evaluation, Multi-Agent Systems, RAG Systems, Vector Search, Prompt Engineering, NLP Adversarial Attacks, Fine-tuning, Embeddings, AI Behavioral Analysis

Databases & Tools: PostgreSQL, MongoDB, Neo4j, Git, Shell scripting

📚 Publications & Research

  • [COLM 2024] "Corrupted by Reasoning: Reasoning Language Models Become Free-Riders in Public Goods Games" - Conference on Language Modeling

    • Multi-agent simulation study revealing how reasoning can lead to free-riding behavior in LLMs
  • [CLEF 2024] "TextTrojaners at CheckThat! 2024: Robustness of Credibility Assessment with Adversarial Examples through BeamAttack" - CLEF Working Notes

    • Novel adversarial attack algorithm achieving 2.8x-3.3x improvement over traditional methods
  • "Democratic or Authoritarian? Probing a New Dimension of Political Biases in Large Language Models" (arXiv)

    • Novel methodology for assessing LLM alignment with democracy-authoritarianism spectrum
  • "Are Language Models Consequentialist or Deontological Moral Reasoners?" (arXiv)

    • Taxonomy of moral rationales for systematically classifying LLM reasoning traces
  • "When Ethics and Payoffs Diverge: LLM Agents in Morally Charged Social Dilemmas" (arXiv)

    • Introduced MoralSim to evaluate LLM behavior in morally charged prisoner's dilemma and public goods games
  • "Robustness of Misinformation Classification Systems to Adversarial Examples Through BeamAttack" (arXiv)

    • Advanced adversarial attack methodology using beam search for enhanced text generation
  • "LSTM-based Time Series Forecasting for Air Quality" - Bachelor's Thesis, University of Bergamo

    • Applied LSTM networks for air-quality forecasting with LIME-based interpretability analysis
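The public-goods-game setting studied in the COLM paper and MoralSim can be sketched in a few lines (an illustrative toy model only, not the papers' codebases; the endowment and multiplier values are hypothetical):

```python
# One round of a linear public goods game: each agent contributes part
# of its endowment to a shared pot, the pot is multiplied, and the
# proceeds are split evenly among all agents.

def payoffs(contributions, endowment=10.0, multiplier=1.6):
    """Return each agent's payoff after one round."""
    n = len(contributions)
    pot = sum(contributions) * multiplier
    share = pot / n
    return [endowment - c + share for c in contributions]

# When multiplier < n, contributing nothing dominates individually even
# though full cooperation maximizes group welfare -- the free-riding
# incentive that reasoning LLM agents were found to exploit.
p = payoffs([10.0, 10.0, 10.0, 0.0])  # three cooperators, one free-rider
```

In this round the free-rider's payoff exceeds each cooperator's, which is exactly the tension between individual payoff and group outcome that the multi-agent simulations probe in LLMs.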

๐Ÿ† Achievements

  • 5.7/6.0 GPA in MSc Computer Science (AI track)
  • Full Academic Scholarship at University of Bergamo (merit-based)
  • Conference Publications in top-tier AI venues (COLM, ACL)
  • Top 1% Faculty Ranking during undergraduate studies
  • 5+ Research Papers under review in leading AI conferences and journals

📫 Connect with Me

LinkedIn | Email

Open to opportunities in AI safety research, LLM evaluation, and multi-agent systems! If you're seeking a results-driven AI researcher with expertise in understanding and improving LLM behavior, let's connect!
