
Major Awards


  • Rothschild Fellow

  • Offered a Fulbright Fellowship

  • Blavatnik Award for exceptional PhD students

  • KLA scholarship for academic achievement and excellence (2020, 2021, 2022)

  • Clore Scholars Programme scholarship for outstanding researchers pursuing a PhD

  • M.A. Valedictorian and Best Thesis Award

  • B.A. Summa Cum Laude

 

Full list: Semantic Scholar | Google Scholar. Twitter: @LChoshen 🧵s


Chosen Choshen Publications

  • Shachar Don-Yehiya, [long list], Leshem Choshen The Future of Open Human Feedback
    [pdf]

  • FM Polo, L Weber, L Choshen, Y Sun, G Xu, M Yurochkin (ICML) tinyBenchmarks: evaluating LLMs with fewer examples
    [bib][pdf][data][Latex]

  • P Yadav, D Tam, L Choshen, CA Raffel, M Bansal (NeurIPS) TIES-Merging: Resolving Interference When Merging Models
    [bib][pdf][code][Latex]

  • A Yehudai, B Carmeli, Y Mass, O Arviv, N Mills, A Toledo, E Shnarch, L Choshen (ICLR) Genie: Achieving human parity in content-grounded datasets generation
    [bib][pdf][code][Latex]

  • S Don-Yehiya, L Choshen, O Abend Learning from Naturally Occurring Feedback
    [bib][pdf][data][Latex]

  • Shachar Don-Yehiya, Elad Venezian, Colin Raffel, Noam Slonim, Yoav Katz, Leshem Choshen (ACL 2023) ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning
    [bib][pdf][model][Latex]

  • Ella Neeman, Roee Aharoni, Or Honovich, Leshem Choshen, Idan Szpektor, Omri Abend (ACL AC Best paper) DisentQA: Disentangling Parametric and Contextual Knowledge with Counterfactual Question Answering
    [bib][pdf][code][Latex][site]

  • Leshem Choshen, Elad Venezian, Shachar Don-Yehiya, Noam Slonim, Yoav Katz Where to start? Analyzing the potential value of intermediate models
    [bib][pdf][code][Latex][site]

  • Leshem Choshen, Elad Venezian, Noam Slonim, Yoav Katz Fusing finetuned models for better pretraining
    [bib][pdf][code][Latex]

  • Leshem Choshen, Guy Hacohen, Daphna Weinshall, Omri Abend (ACL2022) The Grammar-Learning Trajectories of Neural Language Models
    [bib][pdf][code][Latex]

  • Project Debater - an Autonomous Debating System (Nature 2021 cover)
    [bib][pdf][code][Latex]

  • Leshem Choshen, Omri Abend (CoNLL 2022) Enhancing the Transformer Decoder with Transition-based Syntax
    [bib][pdf][code][Latex]

  • Leshem Choshen, Lior Fox, Zohar Aizenbud, Omri Abend (ICLR 2020) On the Weaknesses of Reinforcement Learning for Neural Machine Translation
    [bib] [pdf] [code] [Latex]

  • Liat Ein-Dor, Eyal Shnarch, Lena Dankin, Alon Halfon, Benjamin Sznajder, Ariel Gera, Carlos Alzate, Martin Gleize, Leshem Choshen, Yufang Hou, Yonatan Bilu, Ranit Aharonov, Noam Slonim (AAAI 2020) Corpus wide argument mining - a working solution
    [bib] [pdf] [code] [Latex]

  • Leshem Choshen, Omri Abend (CoNLL 2019) Automatically Extracting Challenge Sets for Non-local Phenomena in Neural Machine Translation
    [bib] [pdf] [code] [Latex]

  • Martin Gleize, Eyal Shnarch, Leshem Choshen, Lena Dankin, Guy Moshkowich, Ranit Aharonov, Noam Slonim (ACL 2019) Are You Convinced? Choosing the More Convincing Evidence with a Siamese Network
    [bib] [pdf] [code] [Latex]

  • Yoav Kantor, Yoav Katz, Leshem Choshen*, Edo Cohen-Karlik, Naftali Liberman, Assaf Toledo, Amir Menczel, Noam Slonim (BEA 2019) Learning to combine Grammatical Error Corrections
    [bib] [pdf] [code] [Latex]

  • Leshem Choshen & Omri Abend (ACL 2018) Inherent Biases in Reference-based Evaluation for Grammatical Error Correction and Text Simplification
    [bib] [pdf] [code] [Latex]

  • Leshem Choshen, Lior Fox, Yonatan Loewenstein (ICLR 2018) DORA The Explorer: Directed Outreaching Reinforcement Action-Selection
    [bib] [pdf] [code] [Latex]

  • Leshem Choshen & Omri Abend (NAACL-HLT 2018) Reference-less Measure of Faithfulness for Grammatical Error Correction
    [bib] [pdf] [code] [Latex]

Errata


  • Automatic Metric Validation for Grammatical Error Correction - the alignment is node alignment, not edge alignment. The two are equivalent up to the top nodes of the DAG, which should be aligned with each other.
