Neslihan Iskender
Title | Cited by | Year
Reliability of human evaluation for text summarization: Lessons learned and challenges ahead
N Iskender, T Polzehl, S Möller
Proceedings of the Workshop on Human Evaluation of NLP Systems (HumEval), 86-96, 2021
39 | 2021
Best practices for crowd-based evaluation of German summarization: Comparing crowd, expert and automatic evaluation
N Iskender, T Polzehl, S Möller
Proceedings of the First Workshop on Evaluation and Comparison of NLP …, 2020
22 | 2020
Internal Crowdsourcing in Companies: Theoretical Foundations and Practical Applications
H Ulbrich, M Wedel, HL Dienel
Springer Nature, 2021
13 | 2021
Argument mining in tweets: Comparing crowd and expert annotations for automated claim and evidence detection
N Iskender, R Schaefer, T Polzehl, S Möller
International Conference on Applications of Natural Language to Information …, 2021
12 | 2021
Towards a Reliable and Robust Methodology for Crowd-Based Subjective Quality Assessment of Query-Based Extractive Text Summarization
N Iskender, T Polzehl, S Möller
Proceedings of The 12th Language Resources and Evaluation Conference (LREC …, 2020
12* | 2020
Does Summary Evaluation Survive Translation to Other Languages?
S Braun, O Vasilyev, N Iskender, J Bohannon
Proceedings of the 2022 Conference of the North American Chapter of the …, 2022
4 | 2022
Towards Personalization by Information Savviness to Improve User Experience in Customer Service Chatbot Conversations
T Polzehl, Y Cao, V Carmona, X Liu, C Hu, N Iskender, A Beyer, S Möller
Proceedings of the 17th International Joint Conference on Computer Vision …, 2022
4 | 2022
Crowdsourcing versus the laboratory: Towards crowd-based linguistic text quality assessment of query-based extractive summarization
N Iskender, T Polzehl, S Möller
Proceedings of the Conference on Digital Curation Technologies (Qurator 2020 …, 2020
4 | 2020
A crowdsourcing approach to evaluate the quality of query-based extractive text summaries
N Iskender, A Gabryszak, T Polzehl, L Hennig, S Möller
2019 Eleventh International Conference on Quality of Multimedia Experience …, 2019
4 | 2019
On the impact of self-efficacy on assessment of user experience in customer service chatbot conversations
Y Cao, VIS Carmona, X Liu, C Hu, N Iskender, A Beyer, S Möller, ...
Conversational AI for Natural Human-Centric Interaction: 12th International …, 2022
3 | 2022
Device-Type Influence in Crowd-based Natural Language Translation Tasks
M Barz, N Büyükdemircioglu, RP Surya, T Polzehl, D Sonntag
3* | 2018
Towards Human-Free Automatic Quality Evaluation of German Summarization
N Iskender, O Vasilyev, T Polzehl, J Bohannon, S Möller
arXiv preprint arXiv:2105.06027, 2021
2 | 2021
Towards Hybrid Human-Machine Workflow for Natural Language Generation
N Iskender, T Polzehl, S Möller
Proceedings of the First Workshop on Bridging Human–Computer Interaction and …, 2021
2 | 2021
An empirical analysis of an internal crowdsourcing platform: IT implications for improving employee participation
N Iskender, T Polzehl
Internal Crowdsourcing in Companies, 103, 2021
2 | 2021
Einfluss der Position und Stimmhaftigkeit von verdeckten Paketverlusten auf die Sprachqualität [Influence of the position and voicing of concealed packet losses on speech quality]
G Mittag, L Liedtke, N Iskender, B Naderi, T Hübschen, G Schmidt, ...
DAGA, 2019
2 | 2019
Internes Crowdsourcing in Unternehmen [Internal Crowdsourcing in Companies]
M Wedel, H Ulbrich, J Pohlisch, E Göll, A Uhl, N Iskender, T Polzehl, ...
Arbeit in der digitalisierten Welt: Praxisbeispiele und Gestaltungslösungen [Work in the Digitalized World: Practical Examples and Design Solutions] …, 2021
1 | 2021
Hybrid Crowd-Machine Workflow for Natural Language Processing
N Iskender
Technische Universität Berlin, 2023
2023
Articles 1–17