Reliability of human evaluation for text summarization: Lessons learned and challenges ahead. N Iskender, T Polzehl, S Möller. Proceedings of the Workshop on Human Evaluation of NLP Systems (HumEval), 86-96, 2021. Cited by 39.
Best practices for crowd-based evaluation of German summarization: Comparing crowd, expert and automatic evaluation. N Iskender, T Polzehl, S Möller. Proceedings of the First Workshop on Evaluation and Comparison of NLP …, 2020. Cited by 22.
Internal Crowdsourcing in Companies: Theoretical Foundations and Practical Applications. H Ulbrich, M Wedel, HL Dienel. Springer Nature, 2021. Cited by 13.
Argument mining in tweets: Comparing crowd and expert annotations for automated claim and evidence detection. N Iskender, R Schaefer, T Polzehl, S Möller. International Conference on Applications of Natural Language to Information …, 2021. Cited by 12.
Towards a Reliable and Robust Methodology for Crowd-Based Subjective Quality Assessment of Query-Based Extractive Text Summarization. N Iskender, T Polzehl, S Möller. Proceedings of The 12th Language Resources and Evaluation Conference (LREC …, 2020. Cited by 12*.
Does Summary Evaluation Survive Translation to Other Languages? S Braun, O Vasilyev, N Iskender, J Bohannon. Proceedings of the 2022 Conference of the North American Chapter of the …, 2022. Cited by 4.
Towards Personalization by Information Savviness to Improve User Experience in Customer Service Chatbot Conversations. T Polzehl, Y Cao, V Carmona, X Liu, C Hu, N Iskender, A Beyer, S Möller. Proceedings of the 17th International Joint Conference on Computer Vision …, 2022. Cited by 4.
Crowdsourcing versus the laboratory: Towards crowd-based linguistic text quality assessment of query-based extractive summarization. N Iskender, T Polzehl, S Möller. Proceedings of the Conference on Digital Curation Technologies (Qurator 2020 …, 2020. Cited by 4.
A crowdsourcing approach to evaluate the quality of query-based extractive text summaries. N Iskender, A Gabryszak, T Polzehl, L Hennig, S Möller. 2019 Eleventh International Conference on Quality of Multimedia Experience …, 2019. Cited by 4.
On the impact of self-efficacy on assessment of user experience in customer service chatbot conversations. Y Cao, VIS Carmona, X Liu, C Hu, N Iskender, A Beyer, S Möller, et al. Conversational AI for Natural Human-Centric Interaction: 12th International …, 2022. Cited by 3.
Device-Type Influence in Crowd-based Natural Language Translation Tasks. M Barz, N Büyükdemircioglu, RP Surya, T Polzehl, D Sonntag. 2018. Cited by 3*.
Towards Human-Free Automatic Quality Evaluation of German Summarization. N Iskender, O Vasilyev, T Polzehl, J Bohannon, S Möller. arXiv preprint arXiv:2105.06027, 2021. Cited by 2.
Towards Hybrid Human-Machine Workflow for Natural Language Generation. N Iskender, T Polzehl, S Möller. Proceedings of the First Workshop on Bridging Human–Computer Interaction and …, 2021. Cited by 2.
An empirical analysis of an internal crowdsourcing platform: IT implications for improving employee participation. N Iskender, T Polzehl. Internal Crowdsourcing in Companies, 103, 2021. Cited by 2.
Einfluss der Position und Stimmhaftigkeit von verdeckten Paketverlusten auf die Sprachqualität [Influence of the position and voicing of concealed packet losses on speech quality]. G Mittag, L Liedtke, N Iskender, B Naderi, T Hübschen, G Schmidt, et al. DAGA, 2019. Cited by 2.
Internes Crowdsourcing in Unternehmen [Internal crowdsourcing in companies]. M Wedel, H Ulbrich, J Pohlisch, E Göll, A Uhl, N Iskender, T Polzehl, et al. Arbeit in der digitalisierten Welt: Praxisbeispiele und Gestaltungslösungen …, 2021. Cited by 1.
Hybrid Crowd-Machine Workflow for Natural Language Processing. N Iskender. Technische Universität Berlin, 2023.