Lukas Galke

Presentations

  • Dang, A., Raviv, L., & Galke, L. (2024). Testing the linguistic niche hypothesis in large language models with a multilingual Wug Test. Talk presented at the International Conference on the Evolution of Language (Evolang 2024). Madison, WI, USA. 2024-05-18 - 2024-05-21.
  • Dang, A., Raviv, L., & Galke, L. (2024). Harnessing the morphological capabilities of large language models with a multilingual wug test. Poster presented at the Highlights in the Language Sciences Conference 2024, Nijmegen, The Netherlands.
  • Galke, L., Ram, Y., & Raviv, L. (2024). Learning pressures and inductive biases in emergent communication: Parallels between humans and deep neural networks. Talk presented at the International Conference on the Evolution of Language (Evolang 2024). Madison, WI, USA. 2024-05-18 - 2024-05-21.
  • Galke, L., Ram, Y., & Raviv, L. (2023). What makes a language easy to deep-learn? Talk presented at Protolang 8. Rome, Italy. 2023-09-27 - 2023-09-28.
  • Galke, L., Ram, Y., & Raviv, L. (2022). Emergent communication for understanding human language evolution: What's missing? Talk presented at the Tenth International Conference on Learning Representations (ICLR 2022). Online. 2022-04-25 - 2022-04-29.
  • Galke, L., & Scherp, A. (2022). Bag-of-words vs. graph vs. sequence in text classification: Questioning the necessity of text-graphs and the surprising strength of a wide MLP. Talk presented at the 60th Annual Meeting of the Association for Computational Linguistics. Dublin, Ireland. 2022-05-22 - 2022-05-27.
  • Galke, L. (2022). Emergent communication and language evolution: What's missing? Talk presented at the Machine Learning and the Evolution of Language workshop at the Joint Conference on Language Evolution (JCoLE). Kanazawa, Japan. 2022-09-05 - 2022-09-08.
  • Galke, L., Ram, Y., & Raviv, L. (2022). Emergent communication for understanding human language evolution: What's missing? Poster presented at the IMPRS Conference 2022, Nijmegen, The Netherlands.
  • Galke, L., Cuber, I., Meyer, C., Nölscher, H. F., Sonderecker, A., & Scherp, A. (2022). General cross-architecture distillation of pretrained language models into matrix embedding. Talk presented at the International Joint Conference on Neural Networks (IJCNN 2022), part of the IEEE World Congress on Computational Intelligence (WCCI 2022). Padua, Italy. 2022-07-18 - 2022-07-23.
